Mayer's reagent is an alkaloidal precipitating reagent used for the detection of alkaloids in natural products. It is freshly prepared by dissolving mercuric chloride (1.36 g) and potassium iodide (5.00 g) in water (100.0 ml). [ 1 ] [ 2 ] Most alkaloids are precipitated from neutral or slightly acidic solution by Mayer's reagent (potassiomercuric iodide solution) to give a cream-coloured precipitate. The test was invented by and named after the German chemist Julius Robert von Mayer (1814–1878).
https://en.wikipedia.org/wiki/Mayer's_reagent
Maynard Victor Olson is an American chemist and molecular biologist. As a professor of genome sciences and medicine at the University of Washington , he became a specialist in the genetics of cystic fibrosis , and one of the founders of the Human Genome Project . During his years at Washington University in St. Louis , he also led efforts to develop yeast artificial chromosomes that allowed for the study of large portions of the human genome. Olson was born and raised in Bethesda, Maryland , where he was educated in the public school system. [ 1 ] After graduating from high school, he earned his undergraduate degree from the California Institute of Technology (Caltech) and then his doctoral degree in inorganic chemistry from Stanford University in 1970. [ 2 ] During his time at Caltech, he attended lectures by Richard Feynman, which he described as a "memorable experience." [ 3 ] Upon graduating with his PhD, Olson worked at Dartmouth College as an inorganic chemist but experienced "an early mid-life crisis" and chose to change fields. Olson decided to begin working on genomics in the 1970s, after reading Molecular Biology of the Gene by James Watson . [ 4 ] He subsequently took a sabbatical and worked with Benjamin Hall at the University of Washington (UW) in Seattle. [ 5 ] In 1979, he accepted a position at Washington University in St. Louis , where he began to work on the development of systematic approaches to the analysis of complex genomes. [ 2 ] Throughout the 1980s, Olson continued to analyze whole genomes in his own laboratory. He worked with computers, developing algorithms for genome mapping projects carried out in parallel: he focused on yeast while John Sulston focused on the nematode worm Caenorhabditis elegans . [ 5 ] This was one of the first uses of restriction fragment length polymorphisms to map a cloned gene. [ 3 ] In 1989, Olson became a member of the Program Advisory Committee on the Human Genome at the National Institutes of Health (NIH). [ 6 ] At the beginning of the following decade, Olson was the recipient of the 1992 Genetics Society of America Medal for his genetic achievements. [ 3 ] During the same year, he returned to UW and joined their Department of Molecular Biotechnology. [ 7 ] In 1994, Olson was elected to the United States National Academy of Sciences . [ 8 ] At the turn of the century, Olson was a lead scientist on a study mapping the genome of Pseudomonas aeruginosa , which was published in Nature . The study opened the possibility of new treatments for patients with cystic fibrosis, patients with severe burns, and others who develop this type of infection. [ 9 ] [ 10 ] As a result of his pioneering gene research, he was the co-recipient of Durham, North Carolina 's City of Medicine Award. The citation specifically noted that without his discovery, sequencing the human genome "would not have been possible." [ 11 ] In 2003, Olson was elected to the American Academy of Arts and Sciences for "developing technological and experimental innovations critical to genome sequencing." [ 12 ] He was elected to the American Philosophical Society in 2005. [ 13 ] He was also the recipient of the 2002 Canada Gairdner International Award [ 2 ] and the 2007 Gruber Prize in Genetics . [ 7 ] Olson retired from his position in 2008. [ 8 ]
https://en.wikipedia.org/wiki/Maynard_Olson
Maynard operation sequence technique (MOST) [ 1 ] is a predetermined motion time system that is used primarily in industrial settings to set the standard time in which a worker should perform a task. To calculate this, a task is broken down into individual motion elements, and each is assigned a numerical time value in units known as time measurement units, or TMUs, where 100,000 TMUs is equivalent to one hour. All the motion element times are then added together and any allowances are added, and the result is the standard time. It is more common in Asia, whereas the original and more sophisticated Methods Time Measurement technique, better known as MTM, is a global standard. The most commonly used form of MOST is BasicMOST, which was released in Sweden in 1972 and in the United States in 1974. Two other variations were released in 1980, called MiniMOST and MaxiMOST. The difference between the three is their level of focus—the motions recorded in BasicMOST are on the level of tens of TMUs, while MiniMOST uses individual TMUs and MaxiMOST uses hundreds of TMUs. This allows for a variety of applications—MiniMOST is commonly used for short (less than about a minute), repetitive cycles, and MaxiMOST for longer (more than several minutes), non-repetitive operations. BasicMOST sits between them, and can be used accurately for operations ranging from less than a minute to about ten minutes. Another variation of MOST is known as AdminMOST. Originally developed and released under the name ClericalMOST in the 1970s, it was recently updated to include modern administrative tasks and renamed. It is on the same level of focus as BasicMOST. Until Windows dropped support for 16-bit programs, it was also possible to use AutoMOST, a knowledge-based system employing decision trees. Developers created logic trees, which could then be used by operators without industrial-engineering training to generate standard times. The user answered a series of questions to route through the logic and supplied inputs (number of parts fitted, etc.). As they made their way through the tree, AutoMOST gathered sub-operation data based on the route and inputs and collated it into the final time for the activity being measured. AutoMOST could pull in sub-operation data from any of the base versions of MOST (MiniMOST, MaxiMOST, or BasicMOST).
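As a rough illustration of the arithmetic described above, the sketch below sums hypothetical motion-element TMU values and converts the total to a standard time; the element values and the 10% allowance are invented for the example, and only the 100,000 TMU = 1 hour conversion comes from the text.

```python
# Sketch: computing a MOST-style standard time from motion-element TMU values.
# The element TMUs and the allowance fraction are hypothetical examples;
# only the conversion 100,000 TMU = 1 hour is taken from the description above.

TMU_PER_HOUR = 100_000  # 1 TMU = 0.036 s

def standard_time_hours(element_tmus, allowance_fraction=0.10):
    """Sum motion-element TMUs, apply an allowance, return standard time in hours."""
    base_tmu = sum(element_tmus)
    total_tmu = base_tmu * (1.0 + allowance_fraction)
    return total_tmu / TMU_PER_HOUR

# Hypothetical task broken into motion elements (TMU values are made up):
elements = [30, 50, 20, 10, 60]
hours = standard_time_hours(elements)
print(f"standard time: {hours:.4f} h = {hours * 3600:.1f} s")
```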
https://en.wikipedia.org/wiki/Maynard_operation_sequence_technique
The Mayor's Award for Excellence in Science and Technology is given annually to recognise important members of the science and engineering communities in New York City . Candidates must live or work in the city. [ 1 ] Nominations are submitted in five categories: [ 2 ] The Mayor chooses winners from a list of finalists submitted by the New York Academy of Sciences and the New York City Department of Cultural Affairs . [ 3 ]
https://en.wikipedia.org/wiki/Mayor's_Award_for_Excellence_in_Science_and_Technology
The Mayo–Lewis equation or copolymer equation in polymer chemistry describes the distribution of monomers in a copolymer . It was proposed by Frank R. Mayo and Frederick M. Lewis . [ 1 ] The equation considers a monomer mix of two components $M_1$ and $M_2$ and the four different reactions that can take place at the reactive chain end terminating in either monomer ($M_1^*$ and $M_2^*$) with their reaction rate constants $k$:

$M_1^* + M_1 \xrightarrow{k_{11}} M_1 M_1^*$

$M_1^* + M_2 \xrightarrow{k_{12}} M_1 M_2^*$

$M_2^* + M_1 \xrightarrow{k_{21}} M_2 M_1^*$

$M_2^* + M_2 \xrightarrow{k_{22}} M_2 M_2^*$

The reactivity ratio for each propagating chain end is defined as the ratio of the rate constant for addition of a monomer of the species already at the chain end to the rate constant for addition of the other monomer: [ 2 ]

$r_1 = \frac{k_{11}}{k_{12}}, \qquad r_2 = \frac{k_{22}}{k_{21}}$

The copolymer equation is then: [ 3 ] [ 4 ] [ 2 ]

$\frac{d[M_1]}{d[M_2]} = \frac{[M_1]\left(r_1[M_1]+[M_2]\right)}{[M_2]\left([M_1]+r_2[M_2]\right)}$

with the concentrations of the components in square brackets. The equation gives the relative instantaneous rates of incorporation of the two monomers. [ 4 ] Monomer 1 is consumed with reaction rate: [ 5 ]

$-\frac{d[M_1]}{dt} = k_{11}[M_1]\sum[M_1^*] + k_{21}[M_1]\sum[M_2^*]$

with $\sum[M_1^*]$ the concentration of all the active chains terminating in monomer 1, summed over chain lengths. $\sum[M_2^*]$ is defined similarly for monomer 2. Likewise the rate of disappearance for monomer 2 is:

$-\frac{d[M_2]}{dt} = k_{12}[M_2]\sum[M_1^*] + k_{22}[M_2]\sum[M_2^*]$

Division of both equations by $\sum[M_2^*]$ followed by division of the first equation by the second yields:

$\frac{d[M_1]}{d[M_2]} = \frac{[M_1]}{[M_2]}\left(\frac{k_{11}\frac{\sum[M_1^*]}{\sum[M_2^*]}+k_{21}}{k_{12}\frac{\sum[M_1^*]}{\sum[M_2^*]}+k_{22}}\right)$

The ratio of active center concentrations can be found using the steady state approximation, meaning that the concentration of each type of active center remains constant.
$\frac{d\sum[M_1^*]}{dt} = \frac{d\sum[M_2^*]}{dt} \approx 0$

The rate of formation of active centers of monomer 1 ($M_2^* + M_1 \xrightarrow{k_{21}} M_2 M_1^*$) is equal to the rate of their destruction ($M_1^* + M_2 \xrightarrow{k_{12}} M_1 M_2^*$) so that

$k_{21}[M_1]\sum[M_2^*] = k_{12}[M_2]\sum[M_1^*]$

or

$\frac{\sum[M_1^*]}{\sum[M_2^*]} = \frac{k_{21}[M_1]}{k_{12}[M_2]}$

Substituting into the ratio of monomer consumption rates yields the Mayo–Lewis equation after rearrangement: [ 4 ]

$\frac{d[M_1]}{d[M_2]} = \frac{[M_1]}{[M_2]}\left(\frac{k_{11}\frac{k_{21}[M_1]}{k_{12}[M_2]}+k_{21}}{k_{12}\frac{k_{21}[M_1]}{k_{12}[M_2]}+k_{22}}\right) = \frac{[M_1]}{[M_2]}\left(\frac{\frac{k_{11}[M_1]}{k_{12}[M_2]}+1}{\frac{[M_1]}{[M_2]}+\frac{k_{22}}{k_{21}}}\right) = \frac{[M_1]}{[M_2]}\frac{\left(r_1[M_1]+[M_2]\right)}{\left([M_1]+r_2[M_2]\right)}$

It is often useful to alter the copolymer equation by expressing concentrations in terms of mole fractions . Mole fractions of monomers $M_1$ and $M_2$ in the feed are defined as $f_1$ and $f_2$, where

$f_1 = 1 - f_2 = \frac{M_1}{M_1 + M_2}$

Similarly, $F$ represents the mole fraction of each monomer in the copolymer:

$F_1 = 1 - F_2 = \frac{dM_1}{d(M_1 + M_2)}$

These equations can be combined with the Mayo–Lewis equation to give [ 6 ] [ 4 ]

$F_1 = 1 - F_2 = \frac{r_1 f_1^2 + f_1 f_2}{r_1 f_1^2 + 2 f_1 f_2 + r_2 f_2^2}$

This equation gives the composition of copolymer formed at each instant. However, the feed and copolymer compositions can change as polymerization proceeds. Reactivity ratios indicate preference for propagation. Large $r_1$ indicates a tendency for $M_1^*$ to add $M_1$, while small $r_1$ corresponds to a tendency for $M_1^*$ to add $M_2$. Values of $r_2$ describe the tendency of $M_2^*$ to add $M_2$ or $M_1$. From the definition of reactivity ratios, several special cases can be derived: Calculation of reactivity ratios generally involves carrying out several polymerizations at varying monomer ratios. The copolymer composition can be analysed with methods such as proton nuclear magnetic resonance , carbon-13 nuclear magnetic resonance , or Fourier transform infrared spectroscopy .
The polymerizations are also carried out at low conversions, so monomer concentrations can be assumed to be constant. With all the other parameters in the copolymer equation known, $r_1$ and $r_2$ can be found. One of the simplest methods for finding reactivity ratios is plotting the copolymer equation and using nonlinear least squares analysis to find the $r_1$, $r_2$ pair that gives the best fit curve. This is preferred, as methods such as Kelen–Tüdős or Fineman–Ross (see below) that involve linearization of the Mayo–Lewis equation will introduce bias into the results. [ 10 ] The Mayo–Lewis method uses a form of the copolymer equation relating $r_1$ to $r_2$: [ 1 ]

$r_2 = \frac{f_1}{f_2}\left[\frac{F_2}{F_1}\left(1 + \frac{f_1 r_1}{f_2}\right) - 1\right]$

For each different monomer composition, a line is generated using arbitrary $r_1$ values. The intersection of these lines is the $r_1$, $r_2$ for the system. More frequently, the lines do not intersect in a single point, and the area in which most lines intersect can be given as a range of $r_1$ and $r_2$ values. Fineman and Ross rearranged the copolymer equation into a linear form: [ 11 ]

$G = H r_1 - r_2$

where

$G = \frac{f_1(2F_1 - 1)}{(1 - f_1)F_1}$ and $H = \frac{f_1^2(1 - F_1)}{(1 - f_1)^2 F_1}$

Thus, a plot of $H$ versus $G$ yields a straight line with slope $r_1$ and intercept $-r_2$. The Fineman–Ross method can be biased towards points at low or high monomer concentration, so Kelen and Tüdős introduced an arbitrary constant,

$\alpha = (H_{\min} H_{\max})^{0.5}$

where $H_{\min}$ and $H_{\max}$ are the lowest and highest values of $H$ from the Fineman–Ross method. [ 12 ] The data can be plotted in a linear form

$\eta = \left[r_1 + \frac{r_2}{\alpha}\right]\mu - \frac{r_2}{\alpha}$

where $\eta = G/(\alpha + H)$ and $\mu = H/(\alpha + H)$. Plotting $\eta$ against $\mu$ yields a straight line that gives $-r_2/\alpha$ when $\mu = 0$ and $r_1$ when $\mu = 1$. This distributes the data more symmetrically and can yield better results. A semi-empirical method for the prediction of reactivity ratios is called the Q–e scheme , which was proposed by Alfrey and Price in 1947. [ 13 ] This involves using two parameters for each monomer, $Q$ and $e$.
The reaction of an $M_1$ radical with an $M_2$ monomer is written as

$k_{12} = P_1 Q_2 \exp(-e_1 e_2)$

while the reaction of an $M_1$ radical with an $M_1$ monomer is written as

$k_{11} = P_1 Q_1 \exp(-e_1 e_1)$

where $P$ is a proportionality constant, $Q$ is the measure of reactivity of a monomer via resonance stabilization, and $e$ is the measure of polarity of a monomer (molecule or radical) via the effect of functional groups on vinyl groups. Using these definitions, $r_1$ and $r_2$ can be found from the ratios of the rate constants; for example,

$r_1 = \frac{k_{11}}{k_{12}} = \frac{Q_1}{Q_2}\exp\bigl(-e_1(e_1 - e_2)\bigr)$

An advantage of this system is that reactivity ratios can be found using tabulated Q–e values of monomers, regardless of what the monomer pair is in the system.
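As a small numerical illustration of the equations above, the sketch below evaluates the instantaneous copolymer composition from the mole-fraction form of the Mayo–Lewis equation and then recovers the reactivity ratios from synthetic data with the Fineman–Ross linearization. The reactivity ratios used (r1 = 0.5, r2 = 1.5) are illustrative assumptions, not values from the text.

```python
# Sketch: instantaneous copolymer composition from the Mayo-Lewis equation,
# plus a Fineman-Ross estimate of r1, r2 from (f1, F1) data.
# The reactivity ratios below (r1 = 0.5, r2 = 1.5) are illustrative only.
import numpy as np

def copolymer_F1(f1, r1, r2):
    """Mole fraction of monomer 1 incorporated at this instant (Mayo-Lewis)."""
    f2 = 1.0 - f1
    num = r1 * f1**2 + f1 * f2
    den = r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2
    return num / den

# Generate synthetic low-conversion data for several feed compositions.
r1_true, r2_true = 0.5, 1.5
f1 = np.linspace(0.1, 0.9, 9)
F1 = copolymer_F1(f1, r1_true, r2_true)

# Fineman-Ross linearization: G = H*r1 - r2
G = f1 * (2 * F1 - 1) / ((1 - f1) * F1)
H = f1**2 * (1 - F1) / ((1 - f1)**2 * F1)
r1_fit, neg_r2_fit = np.polyfit(H, G, 1)   # slope = r1, intercept = -r2
print(f"fitted r1 = {r1_fit:.3f}, r2 = {-neg_r2_fit:.3f}")
```

With noise-free synthetic data the fit returns the input ratios exactly; with real composition data the scatter of the points is what the Kelen–Tüdős weighting described above is meant to handle.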
https://en.wikipedia.org/wiki/Mayo–Lewis_equation
Maytal Caspary Toroker is an associate professor in the Department of Materials Science and Engineering at Technion-Israel Institute of Technology , Haifa , Israel . [ 1 ] She is recognized for her significant contributions in the field of computational materials science, particularly in its applications to catalysis, charge transport, and energy conversion devices. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] Toroker was born in Israel into a Jewish family. She completed her BA degree in molecular biochemistry (2004) in the Department of Chemistry at Technion . [ 1 ] She then completed a direct-track Ph.D. (2009) in the same department. [ 10 ] After receiving her Ph.D., she worked with Prof. Emily A. Carter at Princeton University [ 11 ] from 2010 to 2013, funded by a Marie Curie International Outgoing Fellowship from the European Union. In 2013, she joined the Department of Materials Science and Engineering at Technion-Israel Institute of Technology , Haifa , as an assistant professor, and she is currently working as an associate professor. Her research at Technion mainly involves the development of density functional theory (DFT) and its application. She has worked extensively on developing a method for calculating charge transport through heterostructures using a wave-propagation method. [ 12 ] [ 13 ] [ 8 ] Her research on doped NiOOH (nickel oxyhydroxide) has explained the reason for the material's remarkable success in the oxygen evolution reaction. Her results show that it is the iron's ability to change its oxidation state that facilitates oxygen evolution. [ 14 ] [ 15 ] This work was published in the journal Physical Chemistry Chemical Physics and was featured on the front cover of Issue 11, 2017. [ 15 ] She has also conducted research on transition metal oxides for their applications as photocatalysts and photoelectrodes. [ 16 ] [ 17 ] [ 18 ] She developed a method to calculate band edge positions using first-principles quantum mechanics calculations. [ 9 ] These band edge positions play a crucial role in determining the suitability of these materials for various applications. Apart from this, she also works on metal organic frameworks (MOFs) and covalent organic frameworks (COFs) for their application in photocatalysis and electrocatalysis. [ 3 ] [ 2 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] Currently she is the chair of the European Cooperation in Science and Technology (COST) action on computational materials science for efficient water splitting with nanocrystals from abundant elements. [ 23 ] She is on the editorial advisory board of the journal Advanced Theory and Simulations (Wiley). [ 24 ]
https://en.wikipedia.org/wiki/Maytal_Caspary_Toroker
In electronic design automation , maze runner is a connection routing method that represents the entire routing space as a grid. Parts of this grid are blocked by components, specialised areas, or already present wiring. The grid size corresponds to the wiring pitch of the area. The goal is to find a chain of grid cells that goes from point A to point B. A maze runner may use the Lee algorithm . It uses a wave-propagation style (a wave is the set of all cells that can be reached in n steps) throughout the routing space. The wave stops when the target is reached, and the path is determined by backtracking through the cells.
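The following sketch illustrates the wave-propagation idea behind the Lee algorithm on a small grid: a breadth-first expansion labels each reachable cell with its step count, then the path is recovered by backtracking from the target. The grid, blockages, and endpoints are invented for the illustration.

```python
# Sketch of Lee-style maze routing: BFS wave expansion, then backtracking.
# Grid, obstacles, and endpoints are hypothetical; 1 = blocked cell, 0 = free.
from collections import deque

def lee_route(grid, start, target):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:                       # wave expansion
        cell = queue.popleft()
        if cell == target:
            break
        r, c = cell
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[cell] + 1
                queue.append((nr, nc))
    if target not in dist:
        return None                    # no connection possible
    path = [target]                    # backtrack: step to any neighbour one wave closer
    while path[-1] != start:
        r, c = path[-1]
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if dist.get((nr, nc), -1) == dist[(r, c)] - 1:
                path.append((nr, nc))
                break
    return list(reversed(path))

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(lee_route(grid, (0, 0), (3, 3)))
```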
https://en.wikipedia.org/wiki/Maze_runner
Mazut ( Russian : Мазут , romanized : Mazut ) is a low-quality heavy fuel oil , used in power plants and similar applications in Iran and some countries of the former Soviet Union. In the West, mazut is broken down by fluid catalytic cracking into diesel and other light distillates. [ 2 ] [ 3 ] Mazut may be used for heating houses in some parts of the former USSR and in countries of the Far East [ clarification needed ] that do not have the facilities to blend or break it down into more conventional petrochemicals. Mazut is burned in Iran to compensate for the shortage of natural gas, but it has caused environmental problems, such as huge amounts of air pollution in big cities such as Tehran . [ 4 ] Oil spills involving mazut have particularly harmful effects on the marine environment, as the fuel oil solidifies at 25 °C [ 5 ] and sinks to the ocean floor, making it impossible to remediate chemically. [ 6 ] On 15 December 2024, two oil tankers were wrecked in the Kerch Strait , resulting in 5,000 tonnes of mazut entering the environment in what became known as the 2024 Black Sea oil spill . A similar incident occurred in 2007, also in the Kerch Strait . Mazut-100 is a fuel oil that is manufactured to GOST specifications, for example, GOST 10585-75 (not active) or GOST 10585-2013 (active as of December 2019 [ 7 ] ). Mazut is almost exclusively [ citation needed ] manufactured in Russia , Kazakhstan , Azerbaijan , and Turkmenistan . [ 8 ] This product is typically used for larger boilers in producing steam, since its energy value is high. The most important factor when grading this fuel is the sulfur content, which is mostly determined by the source feedstock. For shipment purposes, this product is considered a "dirty oil" product, and because viscosity drastically affects whether it is able to be pumped, shipping has unique requirements. Much like No. 6 fuel oil ( Bunker C ), mazut is a refinery residual product, that is, what is left over after gasoline , diesel , and other light distillates are distilled from crude oil, except that, unlike bunker fuel, mazut is produced from much lower grade feedstocks. The main difference between the grades of Mazut-100 is the sulfur content. The grades are distinguished by the following sulfur levels: [ 9 ] Very-low-sulfur mazut is generally made from the lowest-sulfur crude feedstocks. It has a very limited volume available for export because: Low- to high-sulfur mazut is available from Russia and other CIS countries (Kazakhstan, Azerbaijan, Turkmenistan). The technical specifications are represented in the same way, according to the Russian GOST 10585-99. Russian-origin mazut commands higher prices. [ citation needed ]
https://en.wikipedia.org/wiki/Mazut
The McCabe–Thiele method is a technique that is commonly employed in the field of chemical engineering to model the separation of two substances by a distillation column . [ 1 ] [ 2 ] [ 3 ] It uses the fact that the composition at each theoretical tray is completely determined by the mole fraction of one of the two components. This method is based on the assumptions that the distillation column is isobaric —i.e., the pressure remains constant—and that the flow rates of liquid and vapor do not change throughout the column (i.e., constant molar overflow). The assumption of constant molar overflow requires that: The method was first published by Warren L. McCabe and Ernest Thiele in 1925, [ 4 ] both of whom were working at the Massachusetts Institute of Technology (MIT) at the time. A McCabe–Thiele diagram for the distillation of a binary (two-component) feed is constructed using the vapor-liquid equilibrium (VLE) data—the relationship between the compositions of the vapor and liquid phases in contact at equilibrium—for the component with the lower boiling point. On a planar graph, both axes represent the mole fractions of the lighter (lower boiling) component; the horizontal (x) and vertical (y) axes represent the liquid and vapor phase compositions, respectively. The x = y line (see Figure 1) represents the scenarios where the compositions of liquid and vapor are the same. The vapor-liquid equilibrium line (the curved line from (0,0) to (1,1) in Figure 1) represents the vapor phase composition for a given liquid phase composition at equilibrium. Vertical lines drawn from the horizontal axis up to the x = y line indicate the composition of the inlet feed stream, the composition of the top (distillate) product stream, and the composition of the bottoms product (shown in red in Figure 1). The rectifying section operating line for the section above the inlet feed stream of the distillation column (shown in green in Figure 1) starts at the intersection of the distillate composition line and the x = y line and continues downward with a slope of L / (D + L), where L is the molar flow rate of reflux and D is the molar flow rate of the distillate product, until it intersects the q-line. The stripping section operating line for the section below the feed inlet (shown in magenta in Figure 1) starts at the intersection of the red bottoms composition line and the x = y line and continues up to the point where the blue q-line intersects the green rectifying section operating line. The q-line (depicted in blue in Figure 1) passes through the point of intersection of the feed composition line and the x = y line and has a slope of q / (q − 1), where the parameter q denotes the mole fraction of liquid in the feed. For example, if the feed is a saturated liquid , q = 1 and the slope of the q-line is infinite (drawn as a vertical line). As another example, if the feed is a saturated vapor , q = 0 and the slope of the q-line is 0 (a horizontal line). [ 2 ] The typical McCabe–Thiele diagram in Figure 1 uses a q-line representing a partially vaporized feed. The number of steps between the operating lines and the equilibrium line represents the number of theoretical plates (or equilibrium stages) required for the distillation. For the binary distillation depicted in Figure 1, the required number of theoretical plates is 6. Constructing a McCabe–Thiele diagram is not always straightforward.
In continuous distillation with a varying reflux ratio, the mole fraction of the lighter component in the top part of the distillation column will decrease as the reflux ratio decreases. Each new reflux ratio will alter the gradient of the rectifying section curve. When the assumption of constant molar overflow is not valid, the operating lines will not be straight. Using mass and enthalpy balances in addition to vapor-liquid equilibrium data and enthalpy-concentration data, operating lines can be constructed using the Ponchon–Savarit method. [ 5 ] If the mixture can form an azeotrope , its vapor-liquid equilibrium line will cross the x = y line, preventing further separation no matter the number of theoretical plates.
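The stage-counting idea can also be done numerically instead of graphically. The sketch below steps off theoretical stages between an equilibrium curve and the two operating lines; the constant relative volatility, the product and feed compositions, the reflux ratio, and the saturated-liquid feed are all illustrative assumptions, not values from the text.

```python
# Sketch: counting theoretical stages McCabe-Thiele style for a binary column.
# Assumptions (not from the article): constant relative volatility alpha = 2.5,
# saturated-liquid feed (vertical q-line), xD = 0.9, xB = 0.1, zF = 0.5, R = 2.0.
alpha = 2.5
xD, xB, zF, R = 0.90, 0.10, 0.50, 2.0

def x_eq(y):
    """Liquid composition in equilibrium with vapor y (constant relative volatility)."""
    return y / (alpha - (alpha - 1) * y)

def y_operating(x):
    if x >= zF:              # rectifying line: slope R/(R+1), passes through (xD, xD)
        return R / (R + 1) * x + xD / (R + 1)
    # stripping line: joins (xB, xB) to the rectifying line / q-line intersection
    y_feed = R / (R + 1) * zF + xD / (R + 1)
    slope = (y_feed - xB) / (zF - xB)
    return xB + slope * (x - xB)

# Step off stages from the top (xD, xD) down to xB.
x, y, stages = xD, xD, 0
while x > xB and stages < 50:
    x = x_eq(y)              # horizontal step to the equilibrium curve
    y = y_operating(x)       # vertical step down to the operating line
    stages += 1
print("theoretical stages stepped off:", stages)
```

This is only a sketch of the construction; switching operating lines exactly at the feed composition is a simplification of how the optimal feed stage is located on a real diagram.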
https://en.wikipedia.org/wiki/McCabe–Thiele_method
In physical chemistry , the McConnell equation gives the probability of an unpaired electron in an aromatic radical compound (such as the benzene radical anion, $C_6H_6^-$) being on a particular atom. It relates this probability, known as the "spin density", to the hyperfine splitting constant , to which it is proportional. The equation is $a = Q\rho$, where $a$ is the hyperfine splitting constant, $\rho$ is the spin density, and $Q$ is an empirical constant that can range from 2.0 to 2.5 mT. The equation is named after Harden M. McConnell of Stanford University , who first presented it in 1956. [ 1 ]
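As a worked illustration with assumed numbers: in the benzene radical anion the unpaired electron is shared equally among the six equivalent carbon atoms, so the spin density on each carbon is $\rho = 1/6$; taking $Q$ in the middle of the quoted range, about 2.25 mT, the equation predicts a proton hyperfine splitting of $a = Q\rho \approx 0.38$ mT.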
https://en.wikipedia.org/wiki/McConnell_equation
The McCormack reaction is a method for the synthesis of organophosphorus compounds . In this reaction, a 1,3- diene and a source of R2P+ are combined to give a phospholenium cation. The reaction is named after W. B. McCormack, a research chemist at DuPont. An illustrative reaction involves phenyldichlorophosphine and isoprene : [ 1 ] The reaction proceeds via a pericyclic [2+4] process. The resulting derivatives can be hydrolyzed to give the phosphine oxide . Dehydrohalogenation gives the phosphole . [ 2 ]
https://en.wikipedia.org/wiki/McCormack_reaction
The McCumber relation (or McCumber theory) is a relationship between the effective cross-sections of absorption and emission of light in the physics of solid-state lasers . [ 1 ] [ 2 ] It is named after Dean McCumber , who proposed the relationship in 1964. Let $\sigma_{\rm a}(\omega)$ be the effective absorption cross-section and $\sigma_{\rm e}(\omega)$ the effective emission cross-section at frequency $\omega$, and let $T$ be the effective temperature of the medium. The McCumber relation is

$\frac{\sigma_{\rm e}(\omega)}{\sigma_{\rm a}(\omega)}\,\exp\!\left(\frac{\hbar\omega}{k_{\rm B}T}\right)=\left(\frac{N_1}{N_2}\right)_T=\exp\!\left(\frac{\hbar\omega_{\rm z}}{k_{\rm B}T}\right)\qquad(1)$

where $\left(\frac{N_1}{N_2}\right)_T$ is the thermal steady-state ratio of populations; the frequency $\omega_{\rm z}$ is called the "zero-line" frequency; [ 3 ] [ 4 ] $\hbar$ is the reduced Planck constant and $k_{\rm B}$ is the Boltzmann constant . Note that the right-hand side of Equation (1) does not depend on $\omega$. It is typical that the lasing properties of a medium are determined by the temperature and the population at the excited laser level, and are not sensitive to the method of excitation used to achieve it. In this case, the absorption cross-section $\sigma_{\rm a}(\omega)$ and the emission cross-section $\sigma_{\rm e}(\omega)$ at frequency $\omega$ can be related to the laser's gain in such a way that the gain at this frequency can be determined as follows:

$G(\omega)=N_2\sigma_{\rm e}(\omega)-N_1\sigma_{\rm a}(\omega)$

D. E. McCumber postulated these properties and found that the emission and absorption cross-sections are not independent; [ 1 ] [ 2 ] they are related by Equation (1). In the case of an idealized two-level atom, the detailed balance for the emission and absorption which preserves the Planck formula for black-body radiation leads to equality of the cross-sections of absorption and emission. In solid-state lasers the splitting of each of the laser levels leads to a broadening which greatly exceeds the natural spectral linewidth . In the case of an ideal two-level atom, the product of the linewidth and the lifetime is of order unity, in agreement with the Heisenberg uncertainty principle . In solid-state laser materials, the linewidth is several orders of magnitude larger, so the spectra of emission and absorption are determined by the distribution of excitation among sublevels rather than by the shape of the spectral line of each individual transition between sublevels. This distribution is determined by the effective temperature within each of the laser levels. The McCumber hypothesis is that the distribution of excitation among sublevels is thermal. The effective temperature determines the spectra of emission and absorption. (The effective temperature is called a temperature even if the excited medium as a whole is far from the thermal state.) Consider the set of active centers (Fig. 1). Assume fast transitions between sublevels within each level, and slow transitions between levels. According to the McCumber hypothesis, the cross-sections $\sigma_{\rm a}$ and $\sigma_{\rm e}$ do not depend on the populations $N_1$ and $N_2$. Therefore, we can deduce the relation by assuming the thermal state.
Let $v(\omega)$ be the group velocity of light in the medium. The product $n_2\sigma_{\rm e}(\omega)v(\omega)D(\omega)$ is the spectral rate of stimulated emission , and $n_1\sigma_{\rm a}(\omega)v(\omega)D(\omega)$ is that of absorption; $a(\omega)n_2$ is the spectral rate of spontaneous emission . (Note that in this approximation, there is no such thing as spontaneous absorption.) The balance of photons gives:

$n_2\sigma_{\rm e}(\omega)v(\omega)D(\omega)+n_2\,a(\omega)=n_1\sigma_{\rm a}(\omega)v(\omega)D(\omega)$

which can be rewritten as an expression (4) for the photon density

$D(\omega)=\frac{n_2\,a(\omega)}{v(\omega)\left(n_1\sigma_{\rm a}(\omega)-n_2\sigma_{\rm e}(\omega)\right)}\qquad(4)$

The thermal distribution of the density of photons, equation (5), follows from blackbody radiation. [ 5 ] Both (4) and (5) hold for all frequencies $\omega$. For the case of idealized two-level active centers, $\sigma_{\rm a}(\omega)=\sigma_{\rm e}(\omega)$ and $n_1/n_2=\exp\!\left(\frac{\hbar\omega}{k_{\rm B}T}\right)$, which leads to the relation (D2) between the spectral rate of spontaneous emission $a(\omega)$ and the emission cross-section $\sigma_{\rm e}(\omega)$. [ 5 ] (We keep the term probability of emission for the quantity $a(\omega)\,{\rm d}\omega\,{\rm d}t$, which is the probability of emission of a photon within a small spectral interval $(\omega,\omega+{\rm d}\omega)$ during a short time interval $(t,t+{\rm d}t)$, assuming that at time $t$ the atom is excited.) The relation (D2) is a fundamental property of spontaneous and stimulated emission, and perhaps the only way to prohibit a spontaneous break of thermal equilibrium in the thermal state of excitations and photons. For each site number $s$ and each sublevel number $j$, the partial spectral emission probability $a_{s,j}(\omega)$ can be expressed from consideration of idealized two-level atoms. [ 5 ] Neglecting cooperative coherent effects, the emission is additive: for any concentration $q_s$ of sites and for any partial population $n_{s,j}$ of sublevels, the same proportionality between $a$ and $\sigma_{\rm e}$ holds for the effective cross-sections (D1). Then, the comparison of (D1) and (D2) gives the relation

$\frac{\sigma_{\rm e}(\omega)}{\sigma_{\rm a}(\omega)}\,\exp\!\left(\frac{\hbar\omega}{k_{\rm B}T}\right)=\frac{n_1}{n_2}$

This relation is equivalent to the McCumber relation (mc) if we define the zero-line frequency $\omega_{\rm z}$ as the solution of the equation

$\left(\frac{N_1}{N_2}\right)_T=\exp\!\left(\frac{\hbar\omega_{\rm z}}{k_{\rm B}T}\right)$

where the subscript $T$ indicates that the ratio of populations is evaluated in the thermal state. The zero-line frequency can be expressed as

$\omega_{\rm z}=\frac{k_{\rm B}T}{\hbar}\ln\left(\frac{N_1}{N_2}\right)_T$

Then the relation above becomes equivalent to the McCumber relation (mc). No specific property of the sublevels of the active medium is required to keep the McCumber relation; it follows from the assumption of quick transfer of energy among the sublevels of the excited laser level and among those of the lower laser level. The McCumber relation (mc) has the same range of validity as the concept of the emission cross-section itself. The McCumber relation is confirmed for various media. [ 6 ] [ 7 ] In particular, relation (1) makes it possible to approximate two functions of frequency, the emission and absorption cross sections, with a single fit .
[ 8 ] In 2006 a strong violation of the McCumber relation was observed for Yb:Gd2SiO5 and reported in three independent journals. [ 9 ] [ 10 ] [ 11 ] Typical behavior of the reported cross-sections is shown in Fig. 2 with thick curves. The emission cross-section is practically zero at a wavelength of 975 nm; this property would make Yb:Gd2SiO5 an excellent material for efficient solid-state lasers . However, the property reported (thick curves) is not compatible with the second law of thermodynamics . With such a material, a perpetual motion device would be possible. It would be sufficient to fill a box with reflecting walls with Yb:Gd2SiO5 and allow it to exchange radiation with a black body through a spectrally-selective window which is transparent in the vicinity of 975 nm and reflective at other wavelengths. Due to the lack of emissivity at 975 nm, the medium would warm up, breaking the thermal equilibrium. On the basis of the second law of thermodynamics, the experimental results [ 9 ] [ 10 ] [ 11 ] were refuted in 2007. With the McCumber theory, a correction was suggested for the effective emission cross-section (black thin curve). [ 3 ] This correction was later confirmed experimentally. [ 12 ]
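In practice the relation is often used to compute one cross-section from the other, as in the sketch below: an assumed absorption cross-section profile is converted into an emission cross-section via σe(ω) = σa(ω)·exp(ℏ(ωz − ω)/(kB·T)). The Gaussian absorption profile, the temperature, and the zero-line wavelength are all invented for the example.

```python
# Sketch: deriving an emission cross-section from an absorption cross-section
# with the McCumber relation sigma_e = sigma_a * exp(hbar*(w_z - w)/(kB*T)).
# The Gaussian absorption profile, T = 300 K, and the 1000 nm zero line are
# illustrative assumptions, not data from the article.
import numpy as np

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K
C = 2.99792458e8         # m/s

T = 300.0                                  # effective temperature, K
lam = np.linspace(900e-9, 1100e-9, 401)    # wavelength grid, m
omega = 2 * np.pi * C / lam                # angular frequency, rad/s
omega_z = 2 * np.pi * C / 1000e-9          # assumed "zero-line" frequency

# Hypothetical absorption cross-section: Gaussian centred at 980 nm.
sigma_a = 1e-24 * np.exp(-((lam - 980e-9) / 20e-9) ** 2)   # m^2

# McCumber relation: the ratio sigma_e/sigma_a depends only on (omega_z - omega).
sigma_e = sigma_a * np.exp(HBAR * (omega_z - omega) / (KB * T))

i = np.argmax(sigma_e)
print(f"peak emission cross-section {sigma_e[i]:.2e} m^2 at {lam[i]*1e9:.0f} nm")
```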
https://en.wikipedia.org/wiki/McCumber_relation
The McCutcheon index or chemotactic ratio is a numerical metric that quantifies the efficiency of movement. It is calculated as the ratio of the net displacement of a moving entity to the total length of the path it has traveled. [ 1 ] [ 2 ] The index acts as an evaluative measure of the directness of movement. A value close to 1 indicates that a moving entity performed its movement in a very direct manner, minimizing detours. On the other hand, a lower value indicates that the entity has achieved only a marginal net displacement, despite traveling a considerable distance. The index is used to evaluate movements of, for example, leukocytes , bacteria , or amoebae . [ 3 ] It is named after Morton McCutcheon who introduced it to describe chemotaxis in leukocytes .
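A small sketch of the calculation, for a hypothetical two-dimensional cell track (the coordinates are invented for the example):

```python
# Sketch: McCutcheon index (chemotactic ratio) for a hypothetical 2D track.
# The track coordinates are invented for the example.
import math

track = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 3), (3, 3)]   # hypothetical positions

path_length = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
net_displacement = math.dist(track[0], track[-1])
mccutcheon_index = net_displacement / path_length

print(f"path length {path_length:.2f}, net displacement {net_displacement:.2f}, "
      f"index {mccutcheon_index:.2f}")
```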
https://en.wikipedia.org/wiki/McCutcheon_index
The McDonald–Kreitman test [ 1 ] is a statistical test often used by evolutionary and population biologists to detect and measure the amount of adaptive evolution within a species by determining whether adaptive evolution has occurred, and the proportion of substitutions that resulted from positive selection (also known as directional selection ). To do this, the McDonald–Kreitman test compares the amount of variation within a species ( polymorphism ) to the divergence between species (substitutions) at two types of sites, neutral and nonneutral. A substitution refers to a nucleotide that is fixed within one species, but a different nucleotide is fixed within a second species at the same base pair of homologous DNA sequences. [ 2 ] A site is nonneutral if it is either advantageous or deleterious. [ 3 ] The two types of sites can be either synonymous or nonsynonymous within a protein-coding region. In a protein-coding sequence of DNA, a site is synonymous if a point mutation at that site would not change the amino acid, also known as a silent mutation . Because the mutation did not result in a change in the amino acid that was originally coded for by the protein-coding sequence, the phenotype, or the observable trait, of the organism is generally unchanged by the silent mutation. [ 4 ] A site in a protein-coding sequence of DNA is nonsynonymous if a point mutation at that site results in a change in the amino acid, resulting in a change in the organism's phenotype. [ 3 ] Typically, silent mutations in protein-coding regions are used as the "control" in the McDonald–Kreitman test. In 1991, John H. McDonald and Martin Kreitman derived the McDonald–Kreitman test while studying differences in the amino acid sequence of the alcohol dehydrogenase gene in Drosophila (fruit flies). McDonald and Kreitman proposed this method to estimate the proportion of substitutions that are fixed by positive selection rather than by genetic drift . [ 5 ] In order to set up the McDonald–Kreitman test, we must first set up a two-way contingency table of our data on the species being investigated, as shown below: To quantify the values for Ds, Dn, Ps, and Pn, you count the number of differences in the protein-coding region for each type of variable in the contingency table. The null hypothesis of the McDonald–Kreitman test is that the ratio of nonsynonymous to synonymous variation within a species is going to equal the ratio of nonsynonymous to synonymous variation between species (i.e. Dn/Ds = Pn/Ps). When positive or negative selection (natural selection) influences nonsynonymous variation, the ratios will no longer be equal. The ratio of nonsynonymous to synonymous variation between species is going to be lower than the ratio of nonsynonymous to synonymous variation within species (i.e. Dn/Ds < Pn/Ps) when negative selection is at work, and deleterious mutations strongly affect polymorphism. The ratio of nonsynonymous to synonymous variation within species is lower than the ratio of nonsynonymous to synonymous variation between species (i.e. Dn/Ds > Pn/Ps) when we observe positive selection. Since mutations under positive selection spread through a population rapidly, they don't contribute to polymorphism but do have an effect on divergence.
[ 6 ] Using an equation derived by Smith and Eyre-Walker , we can estimate the proportion of base substitutions fixed by natural selection, α, [ 7 ] using the following formula:

α = 1 − (Ds × Pn) / (Dn × Ps)

Alpha represents the proportion of substitutions driven by positive selection. Alpha can be equal to any number between −∞ and 1. Negative values of alpha are produced by sampling error or violations of the model, such as the segregation of slightly deleterious amino acid mutations. [ 8 ] Similar to above, our null hypothesis here is that α = 0, and we expect Dn/Ds to equal Pn/Ps. [ 5 ] The neutrality index (NI) quantifies the direction and degree of departure from neutrality (where the Pn/Ps and Dn/Ds ratios are equal). When assuming that silent mutations are neutral, a neutrality index greater than 1 (i.e. NI > 1) indicates negative selection is at work, resulting in an excess of amino acid polymorphism. This occurs because natural selection is acting as purifying selection, weeding out deleterious alleles. [ 9 ] Because silent mutations are neutral, a neutrality index lower than 1 (i.e. NI < 1) indicates an excess of nonsilent divergence, which occurs when positive selection is at work in the population. When positive selection is acting on the species, natural selection favors a specific phenotype over other phenotypes, and the favored phenotype begins to go to fixation in the species as the allele frequency for that phenotype increases. [ 10 ] To find the neutrality index, we can use the following equation:

NI = (Pn / Ps) / (Dn / Ds)

One drawback of performing a McDonald–Kreitman test is that the test is vulnerable to error, as with every other statistical test. Many factors can contribute to errors in estimates of the level of adaptive evolution, including the presence of slightly deleterious mutations, variation of mutation rates across the genome, variation in coalescent histories across the genome, and changes in the effective population size. All these factors result in α being underestimated. [ 11 ] However, according to research done by Charlesworth (2008), [ 3 ] Andolfatto (2008), [ 12 ] and Eyre-Walker (2006), [ 8 ] none of these factors are significant enough to make scientists believe the McDonald–Kreitman test is unreliable, except for the presence of slightly deleterious mutations in species. In general, the McDonald–Kreitman test is often thought to be unreliable because of how significantly the test tends to underestimate the degree of adaptive evolution in the presence of slightly deleterious mutations. A slightly deleterious mutation can be defined as a mutation that negative selection acts on only very weakly, so that its fate is determined by both selection and random genetic drift. [ 3 ] If slightly deleterious mutations are segregating in the population, then it becomes difficult to detect positive selection and the degree of positive selection is underestimated. Weakly deleterious mutations have a larger chance of contributing to polymorphism than strongly deleterious mutations, but still have low probabilities of fixation. This creates bias in the McDonald–Kreitman test's estimate of the degree of adaptive evolution, resulting in a dramatically lower estimate of α. By contrast, since strongly deleterious mutations contribute to neither polymorphism nor divergence, strongly deleterious mutations do not bias estimates of α. [ 13 ] The presence of slightly deleterious mutations is strongly linked to genes that have experienced the greatest reduction in effective population size.
[ 14 ] This means that soon after a recent reduction in effective population size in a species has occurred, such as a bottleneck, we observe a larger presence of slightly deleterious mutations in the protein-coding regions. [ 15 ] We can make a direct connection between the increase in the number of slightly deleterious mutations and the recent decrease in effective population size. [ 14 ] For more information on why population size affects the tendency of slightly deleterious mutations to increase in frequency, refer to the article Nearly neutral theory of molecular evolution . Additionally, as with every statistical test, there is always the chance of having type I error and type II error in the McDonald–Kreitman test. With statistical tests, we must strive to avoid making type I errors, that is, rejecting the null hypothesis when it is in fact true. [ 16 ] However, the McDonald–Kreitman test is very vulnerable to type I error, because of the many factors that can lead to the accidental rejection of the true null hypothesis. Such factors include variation in recombination rate, nonequilibrium demography, small sample sizes, and comparisons involving more recently diverged species. [ 14 ] All of these factors can influence the ability of the McDonald–Kreitman test to detect positive selection, as well as the level of positive selection acting on a species. This inability to correctly determine the level of positive selection acting on a species often leads to a false positive, and the incorrect rejection of the null hypothesis. While performing the McDonald–Kreitman test, scientists also have to avoid making too many type II errors; otherwise, the test's results may be too flawed to be useful. There continues to be experimentation with the McDonald–Kreitman test and with how to improve the accuracy of the test. The most important error to correct for is the severe underestimation of α in the presence of slightly deleterious mutations, as discussed in the previous section. One possible adjustment of the McDonald–Kreitman test is to remove polymorphisms below a specific frequency from the data set, to improve the estimate of the number of substitutions that occurred due to adaptive evolution. [ 3 ] To minimize the impact of slightly deleterious mutations, it has been proposed to exclude polymorphisms that are below a certain cutoff frequency, such as <8% or <5% (there is still much debate about what the best cutoff value should be). By not including polymorphisms under a certain frequency, you can reduce the bias created by slightly deleterious mutations, since fewer polymorphisms will be counted. This will drive the estimate of α up. Therefore, the degree of adaptive evolution estimated will not be so severely underestimated, making the McDonald–Kreitman test more reliable. [ 13 ] Another necessary adjustment is to control for type I error in the McDonald–Kreitman test; refer to the discussion in the previous section. One method to avoid type I errors is to avoid using populations that have undergone a recent bottleneck, meaning they have recently undergone a decrease in effective population size. [ 14 ] To make the analysis as accurate as possible in the McDonald–Kreitman test, it is best to use large sample sizes, but there is still debate about how large "large" should be.
[ 16 ] As another method of controlling for type I error in applications of the McDonald–Kreitman test to non-coding DNA , Peter Andolfatto suggests [ 12 ] establishing significance levels by coalescent simulation with recombination in genome-wide scans for selection on noncoding DNA. By doing this, the accuracy of the statistical test can be improved and false positives avoided. [ 12 ] With all these possible ways to avoid making type I errors, scientists should carefully choose which populations they analyze, avoiding populations that will lead to inaccurate results.
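A minimal sketch of the arithmetic behind the test, computing α and the neutrality index from a contingency table; the counts below are hypothetical example values, not data from the text.

```python
# Sketch: alpha and the neutrality index from a McDonald-Kreitman table.
# The counts below (Dn, Ds, Pn, Ps) are hypothetical, not taken from the text.
def mk_statistics(Dn, Ds, Pn, Ps):
    alpha = 1.0 - (Ds * Pn) / (Dn * Ps)   # Smith & Eyre-Walker estimator
    ni = (Pn / Ps) / (Dn / Ds)            # neutrality index
    return alpha, ni

alpha, ni = mk_statistics(Dn=7, Ds=17, Pn=2, Ps=42)
print(f"alpha = {alpha:.2f}, NI = {ni:.2f}")   # NI < 1 here suggests positive selection
```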
https://en.wikipedia.org/wiki/McDonald–Kreitman_test
The McFadyen–Stevens reaction is a chemical reaction best described as a base-catalyzed thermal decomposition of acylsulfonylhydrazides to aldehydes . [1] [2] Dudman et al. have developed an alternative hydrazide reagent. [3] The mechanism of the McFadyen–Stevens reaction is still under investigation. Two groups have independently proposed a heterolytic fragmentation mechanism. [4] [5] The mechanism involves the deprotonation of the acyl sulfonamide followed by a 1,2-hydride migration to give the alkoxide ( 3 ). The collapse of the alkoxide results in the fragmentation producing the desired aldehyde ( 4 ), nitrogen gas , and an aryl sulfinate ion ( 5 ). Martin et al. have proposed a different mechanism involving an acyl nitrene . [6]
https://en.wikipedia.org/wiki/McFadyen–Stevens_reaction
In microbiology , McFarland standards are used as a reference to adjust the turbidity of bacterial suspensions so that the number of bacteria will be within a given range to standardize microbial testing. An example of such testing is antibiotic susceptibility testing by measurement of minimum inhibitory concentration , which is routinely used in medical microbiology and research. If a suspension used is too heavy or too dilute, an erroneous result (either falsely resistant or falsely susceptible) for any given antimicrobial agent could occur. Original McFarland standards were made by mixing specified amounts of barium chloride and sulfuric acid together. Mixing the two compounds forms a barium sulfate precipitate , which causes turbidity in the solution. A 0.5 McFarland standard is prepared by mixing 0.05 mL of 1.175% barium chloride dihydrate (BaCl 2 •2H 2 O) with 9.95 mL of 1% sulfuric acid (H 2 SO 4 ). [ 1 ] There are now also McFarland standards prepared from suspensions of latex particles, which lengthens the shelf life and stability of the suspensions. The standard can be compared visually to a suspension of bacteria in sterile saline or nutrient broth. If the bacterial suspension is too turbid, it can be diluted with more diluent. If the suspension is not turbid enough, more bacteria can be added. Published tables list the McFarland nephelometer standards (with turbidity measured at a wavelength of 600 nm) as well as McFarland latex standards from Hardy Diagnostics (2014-12-10), measured at the UCSF DeRisi Lab.
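The dilution step mentioned above is simple proportional arithmetic. The sketch below estimates how much diluent to add to match a target turbidity, assuming optical density scales linearly with cell concentration (a common working approximation); the OD readings and volumes are invented for the example.

```python
# Sketch: how much diluent to add so a suspension matches a target turbidity.
# Assumes optical density scales linearly with cell concentration; the OD
# readings and starting volume below are hypothetical.
def diluent_to_add(current_od, target_od, current_volume_ml):
    """Volume of sterile diluent (mL) to add to reach target_od (target < current)."""
    if target_od >= current_od:
        raise ValueError("suspension is not turbid enough; add more bacteria instead")
    final_volume = current_volume_ml * current_od / target_od
    return final_volume - current_volume_ml

# Hypothetical example: suspension reads OD600 = 0.20, the 0.5 McFarland standard
# reads OD600 = 0.10 on the same instrument, starting from 5 mL of suspension.
print(f"add {diluent_to_add(0.20, 0.10, 5.0):.1f} mL of diluent")
```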
https://en.wikipedia.org/wiki/McFarland_standards
The McGehee transformation was introduced by Richard McGehee to study the triple collision singularity in the n-body problem . The transformation blows up the single point in phase space where the collision occurs into a collision manifold : the phase space point is cut out, and in its place a smooth manifold is pasted. This allows the phase space singularity to be studied in detail. What McGehee found was a distorted sphere with four horns pulled out to infinity and the points at their tips deleted. McGehee then went on to study the flow on the collision manifold.
https://en.wikipedia.org/wiki/McGehee_transformation
McIlvaine buffer is a buffer solution composed of citric acid and disodium hydrogen phosphate , also known as citrate-phosphate buffer . It was introduced in 1921 by the United States agronomist Theodore Clinton McIlvaine (1875–1959) from West Virginia University , and it can be prepared at pH values from 2.2 to 8 by mixing two stock solutions. [ 1 ] McIlvaine buffer can be used to prepare a water-soluble mounting medium when mixed 1:1 with glycerol. [ 2 ] Preparation of McIlvaine buffer requires disodium phosphate and citric acid . One liter of a 0.2 M stock solution of disodium hydrogen phosphate can be prepared by dissolving 28.38 g of disodium phosphate in water and adding a quantity of water sufficient to make one liter. One liter of a 0.1 M stock solution of citric acid can be prepared by dissolving 19.21 g of citric acid in water and adding a quantity of water sufficient to make one liter. From these stock solutions, McIlvaine buffer can be prepared in accordance with the following table:
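A quick check of the stock-solution masses follows from molarity × volume × molar mass. The molar masses used below (anhydrous Na2HPO4 ≈ 141.96 g/mol, anhydrous citric acid ≈ 192.12 g/mol) are standard values and are assumptions of this example rather than figures from the text.

```python
# Sketch: grams of solute needed for the two McIlvaine stock solutions,
# computed as molarity * volume * molar mass. The molar masses are standard
# values for the anhydrous compounds and are assumptions of this example.
def grams_needed(molarity_mol_per_l, volume_l, molar_mass_g_per_mol):
    return molarity_mol_per_l * volume_l * molar_mass_g_per_mol

na2hpo4 = grams_needed(0.2, 1.0, 141.96)   # disodium hydrogen phosphate, anhydrous
citric = grams_needed(0.1, 1.0, 192.12)    # citric acid, anhydrous
print(f"Na2HPO4: {na2hpo4:.2f} g per liter")     # ~28.4 g, close to the 28.38 g above
print(f"citric acid: {citric:.2f} g per liter")  # ~19.21 g, matching the text
```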
https://en.wikipedia.org/wiki/McIlvaine_buffer
McIntosh and Fildes' anaerobic jar is an instrument used in the production of an anaerobic environment. This method of anaerobiosis, like others, is used to culture bacteria which die or fail to grow in the presence of oxygen ( anaerobes ). [ 1 ] [ 2 ] It was originally introduced by James McIntosh, Paul Fildes and William Bulloch in 1916. [ 3 ] McIntosh and Fildes, after whom the device has been named, published an improved version in 1921. [ 4 ] The jar, about 20 by 12.5 inches (510 mm × 320 mm), is made of metal. Its parts are as follows:
https://en.wikipedia.org/wiki/McIntosh_and_Fildes'_anaerobic_jar
In mathematics , specifically in the field of group theory , the McKay conjecture is a theorem of equality between two numbers: the number of irreducible complex characters of degree not divisible by a prime number $p$ for a given finite group, and the same number for the normalizer in that group of a Sylow $p$-subgroup . It is named after the Canadian mathematician John McKay , who originally stated a limited version of it as a conjecture in 1971, for the special case of $p=2$ and simple groups . The conjecture was later generalized by other mathematicians to a more general conjecture for any prime value of $p$ and more general groups. In 2023, a proof of the general conjecture was announced by Britta Späth and Marc Cabanes. [ 1 ] Suppose $p$ is a prime number, $G$ is a finite group , and $P\leq G$ is a Sylow $p$-subgroup. Define

$\mathrm{Irr}_{p'}(G)=\{\chi\in\mathrm{Irr}(G):p\nmid\chi(1)\}$

where $\mathrm{Irr}(G)$ denotes the set of complex irreducible characters of the group $G$. The McKay conjecture claims the equality

$|\mathrm{Irr}_{p'}(G)|=|\mathrm{Irr}_{p'}(N_G(P))|$

where $N_G(P)$ is the normalizer of $P$ in $G$. In other words, for any finite group $G$, the number of its irreducible complex representations whose dimension is not divisible by $p$ equals that number for the normalizer of any of its Sylow $p$-subgroups. (Here we count isomorphic representations as the same.) In McKay's original papers on the subject, [ 2 ] [ 3 ] the statement was given for the prime $p=2$ and simple groups, but examples of computations of $|\mathrm{Irr}_{p'}(G)|$ for odd primes or symmetric groups are mentioned. Marty Isaacs also checked the conjecture for the prime 2 and solvable groups $G$. [ 4 ] The first appearance of the conjecture for arbitrary primes is in a paper by Jon L. Alperin , giving also a version in block theory , now called the Alperin–McKay conjecture . [ 5 ] In 2007, Martin Isaacs , Gunter Malle and Gabriel Navarro showed that the McKay conjecture reduces to the checking of a so-called inductive McKay condition for each finite simple group . [ 6 ] [ 7 ] This opens the door to a proof of the conjecture by using the classification of finite simple groups . The Isaacs−Malle−Navarro paper was also an inspiration for similar reductions for the Alperin weight conjecture (named after Jonathan Lazare Alperin ), its block version, the Alperin−McKay conjecture, and Dade's conjecture (named after Everett C. Dade ). The McKay conjecture for the prime 2 was proven by Britta Späth and Gunter Malle in 2016. [ 8 ] An important step in proving the inductive McKay condition for all simple groups is to determine the action of the automorphism group $\mathrm{Aut}(G)$ on the set $\mathrm{Irr}(G)$ for each finite quasisimple group $G$. The solution has been announced by Späth [ 9 ] in the form of an $\mathrm{Aut}(G)$-equivariant Jordan decomposition of characters for finite quasisimple groups of Lie type. A proof of the McKay conjecture for all primes and all finite groups was announced by Britta Späth and Marc Cabanes in October 2023 at various conferences, with a manuscript becoming available later in 2024. [ 10 ]
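A tiny sanity check of the statement for the symmetric group S3 with p = 2 is sketched below. The character degrees are textbook values supplied by hand rather than computed: S3 has irreducible characters of degrees 1, 1, 2, and the normalizer of a Sylow 2-subgroup of S3 is that subgroup itself (cyclic of order 2), with character degrees 1, 1.

```python
# Sketch: checking the McKay count for G = S3 and p = 2.
# Character degrees are hard-coded textbook values, not computed here.
def count_p_prime_degrees(degrees, p):
    """Number of irreducible characters whose degree is not divisible by p."""
    return sum(1 for d in degrees if d % p != 0)

p = 2
degrees_G = [1, 1, 2]   # Irr(S3)
degrees_N = [1, 1]      # Irr(N_G(P)) for a Sylow 2-subgroup P of S3
print(count_p_prime_degrees(degrees_G, p) == count_p_prime_degrees(degrees_N, p))  # True
```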
https://en.wikipedia.org/wiki/McKay_conjecture
The McKelvey–Schofield chaos theorem is a result in social choice theory. It states that if preferences are defined over a multidimensional policy space, then choosing policies using majority rule is unstable. In most cases there will be no Condorcet winner, and any policy can be enacted through a sequence of votes, regardless of the original policy. This means that adding more policies and changing the order of votes ("agenda manipulation") can be used to arbitrarily pick the winner. [ 1 ] Versions of the theorem have been proved for different types of preferences, with different classes of exceptions. A version of the theorem was first proved by Richard McKelvey in 1976, for preferences based on Euclidean distances in R^n. Another version of the theorem was proved by Norman Schofield in 1978, for differentiable preferences. The theorem can be thought of as showing that Arrow's impossibility theorem holds when preferences are restricted to be concave in R^n. The median voter theorem shows that when preferences are restricted to be single-peaked on the real line, Arrow's theorem does not hold, and the median voter's ideal point is a Condorcet winner. The chaos theorem shows that this good news does not continue in multiple dimensions. The theorem considers a finite number of voters, n, who vote for policies which are represented as points in Euclidean space of dimension m. Each vote is between two policies using majority rule. Each voter, i, has a utility function, U_i, which measures how much they value different policies. [ clarification needed ] Richard McKelvey considered the case when preferences are "Euclidean metrics". [ 2 ] That means every voter's utility function has the form U_i(x) = Φ_i(d(x, x_i)) for all policies x and some ideal point x_i, where d is the Euclidean distance and Φ_i is a monotone decreasing function. Under these conditions, there can be a collection of policies with no Condorcet winner under majority rule. This means that, for some policies X_a, X_b, X_c, there can be a cycle of pairwise elections in which X_a beats X_b, X_b beats X_c, and yet X_c beats X_a. McKelvey proved that elections can be even more "chaotic" than that: if there is no equilibrium outcome, [ clarification needed ] then for any two policies A and B there is a sequence of policies X_1, X_2, ..., X_s such that each policy in the chain defeats the previous one in a pairwise majority vote: X_1 beats A, X_2 beats X_1, and so on, until B beats X_s. This is true regardless of whether A would beat B or vice versa. The simplest illustrating example is in two dimensions, with three voters. Each voter then has a most-preferred policy, and any other policy lies on a circular indifference curve centered at that preferred policy. If a policy is proposed, then any policy in the intersection of two voters' preferred-to regions will beat it; almost every point in the plane has a set of points that are preferred to it by two out of three voters (a small computational sketch of this configuration is given below). Norman Schofield extended the theorem to more general classes of utility functions, requiring only that they are differentiable.
He also established conditions for the existence of a directed continuous path of policies, where each policy further along the path would win against an earlier one. [ 3 ] [ 1 ] Jeffrey S. Banks later found errors in some of Schofield's proofs and corrected them. [ 4 ] [ 5 ] This economic theory related article is a stub . You can help Wikipedia by expanding it .
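The following is a minimal computational sketch of the two-dimensional, three-voter example described above; it is not taken from the article or its sources. The ideal points, the starting policy, the search radius, and the helper names (beats, majority_improvement) are all illustrative assumptions. The sketch assumes Euclidean ("circular indifference curve") preferences and simply searches at random for policies that a strict majority prefers to the current one, then chains such majority-preferred moves.

```python
import numpy as np

# Illustrative ideal points for three voters in a two-dimensional policy space.
# These values are hypothetical, chosen only to form a generic (non-collinear) triangle.
IDEALS = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])


def beats(y, x, ideals=IDEALS):
    """Return True if a strict majority of voters prefers policy y to policy x,
    i.e. y is strictly closer than x to more than half of the ideal points."""
    closer = np.linalg.norm(ideals - y, axis=1) < np.linalg.norm(ideals - x, axis=1)
    return closer.sum() > len(ideals) / 2


def majority_improvement(x, ideals=IDEALS, trials=10_000, radius=0.5):
    """Randomly search near x for some policy that beats x under majority rule.
    For a generic triangle of ideal points, such a policy almost always exists."""
    rng = np.random.default_rng()
    for _ in range(trials):
        y = x + rng.uniform(-radius, radius, size=2)
        if beats(y, x, ideals):
            return y
    return None  # no improvement found (rare for generic configurations)


if __name__ == "__main__":
    # Starting from an arbitrary status quo, chain majority-preferred moves.
    # Each step is a policy that would win a pairwise majority vote against
    # the previous one, illustrating the instability described by the theorem.
    x = np.array([0.3, 0.3])
    path = [x]
    for _ in range(20):
        nxt = majority_improvement(x)
        if nxt is None:
            break
        x = nxt
        path.append(x)
    print("start:", path[0], "-> after", len(path) - 1, "majority-preferred steps:", path[-1])
```

Note that this random search only illustrates local instability, namely that almost every status quo can be defeated by some alternative under majority rule; the full chaos theorem is the stronger claim that a deliberately constructed agenda of such pairwise wins can lead from any starting policy to any target policy.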
https://en.wikipedia.org/wiki/McKelvey–Schofield_chaos_theorem
McKinsey & Company (informally McKinsey or McK ) is an American multinational strategy and management consulting firm that offers professional services to corporations, governments, and other organizations. Founded in 1926 by James O. McKinsey , McKinsey is the oldest and largest of the " MBB " management consultancies. The firm mainly focuses on the finances and operations of its clients. Under the direction of Marvin Bower , McKinsey expanded into Europe during the 1940s and 1950s. In the 1960s, McKinsey's Fred Gluck —along with Boston Consulting Group 's Bruce Henderson , Bill Bain at Bain & Company , and Harvard Business School 's Michael Porter —initiated a program designed to transform corporate culture . [ 5 ] [ 6 ] A 1975 publication by McKinsey's John L. Neuman introduced the business practice of "overhead value analysis" that contributed to a downsizing trend that eliminated many jobs in middle management . [ 7 ] [ 8 ] McKinsey has been the subject of significant controversy and of multiple criminal investigations into its business practices. The company has been criticized for its role promoting OxyContin use during the opioid crisis in North America , its work with Enron , and its work for authoritarian regimes like Saudi Arabia and Russia . [ 9 ] [ 10 ] [ 11 ] [ 12 ] The US Justice Department 's criminal investigation, in which a grand jury has been convened to determine charges, concerns the firm's role in the opioid crisis and alleged obstruction of justice related to its activities in that sector. [ 13 ] McKinsey works with some of the largest fossil-fuel-producing governments and companies, including on efforts to increase fossil fuel demand. [ 14 ] [ 15 ] McKinsey has a notoriously competitive hiring process, [ 16 ] [ 17 ] and is widely seen as one of the most selective employers in the world. [ 18 ] McKinsey recruits primarily from top-ranked business schools , [ 19 ] [ 20 ] [ 21 ] and was one of the first management consultancies to recruit a limited number of candidates with advanced academic degrees (e.g., PhD) as well as deep field expertise, particularly those who have demonstrated business acumen and analytical skills. [ 22 ] [ 23 ] McKinsey publishes a business magazine, the McKinsey Quarterly . McKinsey & Company was founded in Chicago under the name James O. McKinsey & Company in 1926 by James O. McKinsey , a professor of accounting at the University of Chicago . [ 24 ] [ 25 ] He conceived the idea after witnessing inefficiencies in military suppliers while working for the United States Army Ordnance Department . [ 26 ] : 4 The firm called itself an "accounting and management firm" and started out giving advice on using accounting principles as a management tool. [ 27 ] : 3 McKinsey's first partners were AT Kearney , hired in 1929, [ 28 ] [ a ] and Marvin Bower , hired in 1933. [ 30 ] [ 31 ] : 133 [ b ] Bower is credited with establishing McKinsey's values and principles in 1937, based on his experience as a lawyer. The firm developed an " up or out " policy, under which consultants who are not promoted are asked to leave. In 1937, [ 32 ] [ 33 ] Bower established a set of rules: that consultants should put the interests of clients before McKinsey's revenues, not discuss client affairs, tell the truth even if it means challenging the client's opinion, and only perform work that is both necessary and that McKinsey can do well. [ 34 ] [ 33 ] Bower created the firm's principle of only working with CEOs, which was later expanded to CEOs of subsidiaries and divisions.
He also created McKinsey's principle of only working with clients the firm felt would follow its advice. [ 35 ] [ 36 ] Bower also established the firm's language. [ 33 ] In 1932, the company opened its second office in New York City . [ 27 ] : 20 In 1935, McKinsey left the firm temporarily to be the chairman and CEO of client Marshall Field's . [ 26 ] : 5 [ 31 ] : 133 In 1935, McKinsey merged with accounting firm Scovell, Wellington & Company, creating the New York-based McKinsey, Wellington & Co. and splitting off the accounting practice into Chicago-based Wellington & Company. [ 26 ] : 5 A Wellington project that accounted for 55 percent of McKinsey, Wellington & Company's billings was about to expire [ 37 ] and Kearney and Bower had disagreements about how to run the firm. Bower wanted to expand nationally and hire young business school graduates, whereas Kearney wanted to stay in Chicago and hire experienced accountants. [ 31 ] : 134 In 1937, James O. McKinsey died after catching pneumonia. [ 27 ] [ 29 ] This led to the division of McKinsey, Wellington & Company in 1939. The accounting practice returned to Scovell, Wellington & Company, while the management engineering practice was split into McKinsey & Company and McKinsey, Kearney & Company. [ 28 ] [ 37 ] Bower had partnered with Guy Crockett from Scovell Wellington, who invested in the new McKinsey & Company and became managing partner, while Marvin Bower is credited with founding the firm's principles and strategy as his deputy. [ 37 ] [ 38 ] The New York office purchased exclusive rights to the McKinsey name from the former McKinsey Chicago office which was separated with AT Kearney in 1946. [ 39 ] : 25 McKinsey & Company grew quickly in the 1940s and 1950s, especially in Europe. [ 26 ] : 12–13 [ 39 ] : 25 [ 40 ] It had 88 staff in 1951 [ 41 ] and more than 200 by the 1960s, [ 39 ] including 37 in London by 1966. [ 41 ] In the same year, McKinsey had six offices in major US cities, including San Francisco , Cleveland , Los Angeles and Washington D.C. , as well as six abroad. These foreign offices were primarily in Europe , such as in London , Paris , and Amsterdam , as well as in Melbourne . [ 26 ] : 12–13 By this time, one third of the company's revenues originated from its European offices. [ 39 ] Guy Crockett stepped down as managing director in 1959, and Marvin Bower was elected in his place. [ 37 ] [ 35 ] : 61 McKinsey's profit-sharing, executive and planning committees were formed in 1951. [ 37 ] The organization's client base expanded especially among governments, defense contractors, blue-chip companies and military organizations in the post– World War II era. [ 32 ] McKinsey became a private corporation with shares owned exclusively by McKinsey employees in 1956. [ 26 ] : 12 [ 37 ] After Bower stepped down in 1967, the firm's revenues declined. [ 42 ] New competitors like the Boston Consulting Group and Bain & Company created increased competition for McKinsey by marketing specific branded products, such as the Growth–Share Matrix , and by selling their industry expertise. [ 40 ] [ 43 ] [ 34 ] In 1971, McKinsey created the Commission on Firm Aims and Goals, which found that McKinsey had become too focused on geographic expansion and lacked adequate industry knowledge. The commission advised that McKinsey slow its growth and develop industry specialties. [ 26 ] : 14 [ 40 ] In 1975, John L. 
Neuman, a McKinsey consultant at the time, published "Make Overhead Cuts That Last" in Harvard Business Review , [ 44 ] in which he introduced new rules for scientific management such as "overhead value analysis" (OVA). [ 7 ] : 65 OVA guided McKinsey's "path to downsizing", responding to the "mid-century corporation's excessive reliance on middle management". [ 8 ] Neuman wrote that the "process, though swift, is not painless. Since overhead expenses are typically 70% to 85% people-related and most savings come from work-force reductions, cutting overhead does demand some wrenching decisions." [ 44 ] In 1976, Ron Daniel was elected managing director, serving until 1988. [ 27 ] : 42 Daniel and Fred Gluck helped shift the firm away from its generalist approach by developing 15 specialized working groups within McKinsey called Centers of Competence and by developing practice areas called Strategy, Operations and Organization. Daniel also began McKinsey's knowledge management efforts in 1987. [ 26 ] : 15–17 This led to the creation of an IT system that tracked McKinsey engagements, a process to centralize knowledge from each practice area and a resource directory of internal experts. [ 26 ] : 6–7 By the end of his tenure in 1988, the firm was growing again and had opened new offices in Rome , Helsinki , São Paulo and Minneapolis . [ 26 ] : 15–17 [ 40 ] Fred Gluck was McKinsey's managing director from 1988 to 1994. [ 45 ] The firm's revenues doubled during his tenure. [ 34 ] He organized McKinsey into 72 "islands of activity" that were grouped under seven sectors and seven functional areas. [ 26 ] : 18 By 1997, McKinsey had grown eightfold over its size in 1977. [ 33 ] In 1989, the firm tried to acquire talent in IT services through a $10 million purchase of the Information Consulting Group (ICG), but a culture clash caused 151 out of the 254 ICG staff members to leave by 1993. [ 34 ] [ 45 ] In 1994, Rajat Gupta became the first non-American-born partner to be elected as the firm's managing director. [ 46 ] By the end of his tenure, McKinsey had grown from 2,900 to 7,700 staff and from 58 to 84 locations. [ 47 ] He opened new international offices in cities such as Moscow , Beijing and Bangkok . [ 26 ] : 20 Continuing the structure developed by prior directors, Gupta also created 16 industry groups charged with understanding specific markets and instituted a three-term limit for the managing director. [ 26 ] : 22 McKinsey created practice areas for manufacturing and business technology in the late 1990s. [ 26 ] : 21, 23 McKinsey set up "accelerators" in the 1990s, in which the firm accepted stock-based reimbursement to help internet startups ; [ 47 ] [ 48 ] the company performed more than 1,000 e-commerce projects from 1998 to 2000 alone. [ 26 ] : 24 An October 1, 2000, article in the New York Times described the compulsory mini-courses that McKinsey—and its two largest rivals, Boston Consulting and Bain—offered their "hyper-educated" young new recruits. Once these courses were completed, the newly certified management consultants would begin their work of "advising the executives of multibillion-dollar companies" on "projects" not related to their academic backgrounds—"[l]awyers would help packaged-foods companies develop new products, and physicists would tell Internet start-ups how to stand out from the crowd." [ 49 ] The bursting of the dot-com bubble led to a reduction in the utilization rate of McKinsey's consultants from 64 to 52 percent.
Though McKinsey avoided dismissing any personnel following the downturn, [ 47 ] the decline in revenues and losses from equity-based payments as stock lost value, together with a recession in 2001, meant the company had to reduce its prices, cut expenses and reduce hiring. [ 26 ] : 25 In 2001, McKinsey launched several practices that focused on the public and social sector. It took on many public sector or nonprofit clients on a pro bono basis. [ 32 ] By 2002, McKinsey's knowledge management budget had grown to $35.8 million, up from $8.3 million in 1999. [ 32 ] : 1 Its revenues were 50, 20, and 30 percent from strategy, operations, and technology consulting, respectively. [ 26 ] : 20 In 2003, Ian Davis , the head of the London office, was elected to the position of managing director. [ 50 ] Davis promised a return to the company's core values after a period in which the firm had expanded rapidly, which some McKinsey consultants felt was a departure from the company's heritage. [ 51 ] Also in 2003, the firm established a headquarters for the Asia-Pacific region in Shanghai . By 2004, more than 60 percent of McKinsey's revenues were generated outside the U.S. [ 32 ] The company started a Social Sector Office (SSO) in 2008, which is divided into three practices: Global Public Health, Economic Development and Opportunity Creation (EDHOC) and Philanthropy. McKinsey does much of its pro-bono work through the SSO, whereas a Business Technology Office (BTO), founded in 1997, provides consulting on technology strategy. [ 52 ] [ 53 ] [ 54 ] By 2009, the firm consisted of 400 directors (senior partners), up from 151 in 1993. [ 34 ] [ 55 ] Dominic Barton was elected as managing director, a role to which he was re-elected in 2012 and 2015. [ 55 ] Rajat Gupta and another McKinsey executive, Anil Kumar , were among those convicted in a government investigation into insider trading for sharing inside information with Galleon Group hedge fund owner Raj Rajaratnam . [ 56 ] [ 57 ] Though McKinsey was not accused of any wrongdoing, the convictions were embarrassing for the firm, since it prides itself on integrity and client confidentiality. [ 58 ] [ 59 ] [ 60 ] McKinsey no longer maintains a relationship with either senior partner. [ 61 ] [ 62 ] Senior partner Anil Kumar, described as Gupta's protégé, [ 63 ] left the firm after the allegations in 2009, and pleaded guilty in January 2010. [ 64 ] [ 65 ] While he and other partners had been pitching McKinsey's consulting services to the Galleon Group, Kumar and Rajaratnam reached a private consulting agreement, violating McKinsey's policies on confidentiality. [ 66 ] In October 2011, Gupta was arrested by the FBI on charges of sharing insider information from confidential board meetings with Rajaratnam. [ 68 ] [ 69 ] He was convicted in June 2012 of four counts of conspiracy and securities fraud , and acquitted on two counts. [ 67 ] At least twice, Gupta had used a McKinsey phone to call Rajaratnam, and as a senior partner emeritus [ 62 ] he retained other perks—an office, an assistant, and a $6 million retirement salary that year. [ 70 ] After the scandal, McKinsey instituted new policies and procedures to discourage future indiscretions from consultants, [ 71 ] including investigating other partners' ties to Gupta. [ 72 ] [ 73 ] In February 2018, Kevin Sneader was elected managing director for a three-year term that began on July 1, 2018.
[ 74 ] McKinsey has consulted for multiple cities, states and government organizations during the COVID-19 pandemic . During the first four months of the pandemic, McKinsey obtained in excess of $100 million in consulting work, including no-bid contracts with the United States Department of Veterans Affairs and Air Force . [ 75 ] The reopening guidelines for Florida's Miami-Dade County , produced with McKinsey's input, were criticized by local media and officials for complexity and lack of clarity. [ 75 ] McKinsey discontinued its investment banking advisory unit in 2021, citing "personnel matters" as the reason. [ 76 ] In 2021, McKinsey's Australian office made two acquisitions: Hypothesis, a digital product development company, and Venturetec, an innovation consulting firm. [ 77 ] [ 78 ] On June 1, 2022, McKinsey announced that it had acquired Caserta, a data engineering firm. [ 79 ] In March 2023, McKinsey announced a layoff of 1,400 employees, a rare round of job cuts for the company. [ 80 ] McKinsey & Company was originally organized as a partnership [ 81 ] before being legally restructured as a private corporation with shares owned by its partners in 1956. [ 37 ] [ 82 ] It mimics the structure of a partnership and senior employees are called "partners". [ 51 ] [ 81 ] The company has a flat hierarchy and each member is assigned a mentor. [ 83 ] : 65, 142 Since the 1960s, McKinsey's managing director has been elected by a vote of senior directors to serve up to three three-year terms or until reaching the mandatory retirement age of 60. [ 84 ] The firm is also managed by a series of committees, each of which has its own area of responsibility. [ 26 ] : 22 [ 33 ] : 6 By 2013, McKinsey was described as having a decentralized structure, whereby different offices operate similarly but independently. [ 35 ] [ 36 ] The company's budgeting is centralized, but individual consultants are given a large degree of autonomy. [ 85 ] As a global firm, McKinsey does not have a traditional "headquarters"; the managing partner chooses his or her home office. [ 86 ] McKinsey & Company provides strategy and management consulting services, such as providing advice on an acquisition, developing a plan to restructure a sales force, creating a new business strategy or providing advice on downsizing, according to the 2013 book The Firm . [ 35 ] [ 83 ] The 1999 book The McKinsey Way said that McKinsey consultants designed and implemented studies to evaluate management decisions, using data and interviews to test hypotheses , [ 83 ] which were then presented to senior management, typically in a PowerPoint presentation and a booklet. [ 83 ] McKinsey & Company has traditionally charged approximately 25 percent more than competing firms. [ 52 ] Its invoices traditionally contain only a single line. A typical McKinsey engagement (called a "study") can last between two and twelve months and involves three to six McKinsey consultants. [ 52 ] An engagement is usually managed by a generalist who covers the region where the client's headquarters is located and by specialists with either industry or functional expertise. [ 34 ] Unlike some competing consulting firms, McKinsey does not have a policy against working for multiple competing companies (although individual consultants are barred from doing so). McKinsey & Company was the first management consultancy to hire recent graduates instead of experienced business managers, when it started doing so in 1953.
[ 87 ] According to a 1997 article in The Observer , McKinsey recruited recent graduates and "imbue[d] them with a religious conviction" in the firm, then cull[ed] through them with its " up-or-out " policy. [ 33 ] The "up or out" policy, which was established in 1951, meant that consultants that were not being promoted within the firm were asked to leave. [ 41 ] : 208 [ 88 ] By 1997, about one-fifth of McKinsey's consultants departed under the up or out policy each year. [ 33 ] [ 85 ] McKinsey's practice of hiring recent graduates and the "up-or-out" philosophy were originally based on Marvin Bower's experiences at the law firm Jones Day in the 1930s, as well as the " Cravath system " used at the law firm Cravath, Swaine and Moore . [ 41 ] : 206 In the early 21st century, it has consistently been recognized by Vault as the most prestigious consulting firm employer in the world. [ 89 ] In 2018, 800,000 candidates applied for 8,000 jobs. [ 90 ] While many recruits have MBAs, by 2009, less than half of the firm's recruits were business majors; [ 52 ] : 7 by 1999, recruits had advanced degrees in science, medicine, engineering or law. [ 83 ] [ 91 ] [ 92 ] A November 1, 1993, profile story in Fortune magazine said that McKinsey & Company was "the most well-known, most secretive, most high-priced, most prestigious, most consistently successful, most envied, most trusted, most disliked management consulting firm on earth". [ 34 ] In the article, McKinsey was cited as claiming that its consultants were not motivated by money, [ 34 ] and that partners talked to each other with "a sense of personal affection and admiration". [ 34 ] The article described a culture clash that occurred in the early 1990s, leading to the departure of 151 out of the 254 ICG staff members. [ 34 ] In their 1997 book Dangerous Company: Management Consultants and the Businesses They Save and Ruin , authors James O'Shea and Charles Madigan said that McKinsey's culture had often been compared to religion , because of the influence, loyalty and zeal of its members. [ 36 ] [ 93 ] The firm has a policy against discussing specific client situations. [ 36 ] A September 1997 The News Observer story said that McKinsey's internal culture was "collegiate and ruthlessly competitive" and has been described as arrogant. [ 33 ] Ethan Rasiel's 1999 book entitled The McKinsey Way , described a culture at McKinsey's whereby members were not supposed to "sell" their services. [ 83 ] [ 94 ] The Sunday Times wrote that McKinsey was a pioneer in the industry—the "first firm to hire MBA graduates from the top business schools to staff its projects, rather than relying on older industry personnel." They were still trying to keep a "very low profile public image" in 2005. [ 87 ] That year, an article in The Guardian said that McKinsey "hours are long, expectations high and failure not acceptable". [ 94 ] According to an October 2009 Reuters article, the firm had a "button-down culture" focused on "playing by the rules". [ 95 ] In his 2013 book, The Firm: The Story of McKinsey and Its Secret Influence on American Business , Duff McDonald described how McKinsey's consultants were expected to become a part of the community and recruit clients from church, charitable foundations, board positions and other community involvements. [ 35 ] McDonald wrote that McKinsey calls itself "The Firm" and its employees "members". 
[ 35 ] [ 34 ] BusinessWeek summarized The Firm ' s description of McKinsey as a "fading empire, where hubris and changing times have diminished the firm's statures." [ 96 ] In his February 2020 in-depth article in The Atlantic , Daniel Markovits argues that McKinsey promotes "intellect and elite credentials" and "Meritocrats" over "directly relevant experience". [ 8 ] Many of McKinsey's alumni become CEOs of major corporations or hold important government positions. [ 87 ] In doing so, they influence the other organizations with McKinsey's values and culture. [ 36 ] [ 87 ] [ 93 ] McKinsey's alumni have been appointed as CEOs or high-level executives. [ 93 ] [ 97 ] In his 2010 publication, The Lords of Strategy: The Secret Intellectual History of the New Corporate World , business journalist Walter Kiechel traced the roots of a profound change in corporate management to "four mavericks" in the 1960s—Fred Gluck at McKinsey & Company, Boston Consulting Group's Bruce Henderson, Bill Bain at Bain & Company, and Harvard Business School professor, Michael Porter. [ 5 ] Kiechel recounted how they "revolutionized the way we think about business, changed the very soul of the corporation, and transformed the way we work," according to the Harvard Business Press synopsis. [ 6 ] McKinsey has been either directly involved in, or closely associated with, a number of notable scandals, [ 98 ] involving Enron in 2001, [ 98 ] [ 99 ] Galleon in 2009, [ 56 ] Valeant in 2015, [ 100 ] Saudi Arabia in 2018, [ 101 ] China in 2018, [ 102 ] ICE in 2019, [ 103 ] an internal conflict of interest in 2019, [ 104 ] and Purdue Pharma in 2019, [ 105 ] among others. By 2019, major news outlets, including The New York Times and ProPublica , had raised concerns about McKinsey's business practices. [ 106 ] McKinsey & Company consultants regularly publish books, research and articles about business and management. [ 83 ] : 51 [ 107 ] : 55 The firm spends $50–$100 million a year on research. [ 107 ] : 54 McKinsey was one of the first organizations to fund management research, when it founded the Foundation for Management Research in 1955. [ 27 ] The firm began publishing a business magazine, The McKinsey Quarterly , in 1964. [ 108 ] It funds the McKinsey Global Institute, which studies global economic trends and was founded in 1990. [ 32 ] It also launched the McKinsey Institute for Black Economic Mobility in 2020 to fund research focused on advancing inclusive growth & racial equity globally. [ 109 ] Many consultants are contributors to the Harvard Business Review . [ 93 ] McKinsey consultants published only two books from 1960 to 1980, then more than 50 from 1980 to 1996. [ 107 ] : 55 McKinsey's publications and research give the firm a "quasi-academic" image. [ 107 ] A McKinsey book, In Search of Excellence , was published in 1982. [ 110 ] It featured eight characteristics of successful businesses based on an analysis of 43 top performing companies. [ 107 ] : 87–89 [ 110 ] : 348 It marked the beginning of McKinsey's shift from accounting to "softer" aspects of management, like skills and culture. [ 110 ] : 359 According to David Guest from King's College , In Search of Excellence became popular among business managers because it was easy to read, well-marketed and some of its core messages were valid. However, it was disliked by academics because of flaws in its methodology. 
Additionally, a 1984 analysis by BusinessWeek found that many of the companies identified as "excellent" in the book no longer met the criteria only two years later. [ 110 ] A 1997 McKinsey article and a book the firm published in 2001 on "The War for Talent" [ 111 ] prompted academics and the business community to start focusing more on talent management. [ 112 ] : 163 The authors found that the best-performing companies were "obsessed" with acquiring and managing the best talent. [ 113 ] They advocated that companies rank employees by their performance and promote "stars", while targeting under-performers for improvement or layoffs. [ 113 ] [ 114 ] After the book was published, Enron , a company which followed many of its principles, was involved in a scandal that led to its bankruptcy . [ 113 ] In May 2001, a Stanford professor wrote a paper critical of the "War for Talent", arguing that it prioritized individuals at the expense of the larger organization. [ 111 ] McKinsey consultants published Creative Destruction in 2001. [ 35 ] : 247 The book suggested that CEOs need to be willing to change or rebuild a company, rather than protect what they have created. [ 115 ] It found that of the companies on the first S&P 500 list from 1957, only 74 were still in business by 1998. [ 115 ] [ 116 ] The New York Times said it "makes a cogent argument that in times of rampant, uncertain change ... established companies are handcuffed by success." [ 117 ] In 1999, McKinsey consultants published The Alchemy of Growth , which established three "horizons" for growth: core enhancements, new growth platforms and options. [ 118 ] In February 2011, McKinsey surveyed 1,300 US private-sector employers on their expected response to the Affordable Care Act (ACA). [ 119 ] [ 120 ] Thirty percent of respondents said they anticipated they would probably or definitely stop offering employer-sponsored health coverage after the ACA went into effect in 2014. [ 121 ] [ 122 ] These results, published in June 2011 in the McKinsey Quarterly , [ 119 ] became "a useful tool for critics of the ACA and a deep annoyance for defenders of the law", according to an article in Time magazine. [ 123 ] Supporters of healthcare reform argued that the survey's findings far exceeded estimates by the Congressional Budget Office and insisted that McKinsey disclose the survey's methodology. [ 124 ] [ 125 ] [ 126 ] Two weeks after publishing the survey results, [ 123 ] McKinsey released the contents of the survey, including the questionnaire and 206 pages of survey data. [ 127 ] In its accompanying statement, [ 128 ] McKinsey said the survey was intended to capture the attitudes of employers at a certain point in time, not to make a prediction. [ 129 ] [ 130 ] Since 1990, McKinsey has been publishing Valuation: Measuring and Managing the Value of Companies , a textbook on valuation. [ 131 ] In 2022, McKinsey senior partners Carolyn Dewar, Scott Keller, and Vikram Malhotra authored the book CEO Excellence , which was published by Scribner . [ 132 ] Marginal abatement cost curves attempt to compare the financial costs of different options for reducing pollution in a region and are used in emissions trading, policy discussions and incentive programs. [ 133 ] McKinsey & Company released its first marginal abatement cost (MAC) curve for greenhouse gas emissions in February 2007, which was updated to version two in January 2009. [ 134 ] [ 135 ] McKinsey & Company's MAC curve has become the most widely used, [ 136 ] and is the basis for McKinsey's consulting on climate change and sustainability.
[ 137 ] McKinsey's curve identifies abatement strategies with negative net cost, a finding that has been controversial among economists. [ 138 ] The International Association for Energy Economics said in The Energy Journal that McKinsey's cost curve was popular among policymakers because it suggests they can take "bold action towards improving energy efficiency without imposing costs on society." [ 139 ] In a 2010 report, the Rainforest Foundation UK said McKinsey's cost curve methodology was misleading for policy decisions regarding the Reduced Emissions from Deforestation and Forest Degradation (REDD) program. The report argued that McKinsey's calculations exclude certain implementation and governance costs, which makes the curve favor industrial uses of forests while discouraging subsistence projects. [ 140 ] Greenpeace said the curve has allowed Indonesia and Guyana to win financial incentives from the United Nations by creating inflated estimates of current deforestation so they could demonstrate reductions in comparison. [ 141 ] [ 142 ] [ 143 ] McKinsey said it had made clear in the cost-curve publications that cost curves do not translate "mechanically" into policy implications and that policymakers should consider "many other factors" before introducing new laws. [ 141 ] [ 142 ] In April 2022, in the report "Global Energy Perspective", the company predicted that fossil fuel use would peak between 2023 and 2025 and that fossil fuels would account for 43% of energy consumption in 2050, with emissions peaking before 2030 in all scenarios. [ 144 ] However, the September 2024 edition of the report said that fossil fuel consumption is expected to plateau between 2025 and 2035 and that fossil fuels will continue to play a major role, accounting for 40%–60% of energy supply by 2050, with emissions peaking between 2025 and 2035. The report mentions "a more challenging geopolitical landscape", rising electricity demand (especially from artificial intelligence ), and technical problems that are complicating the transition. It also notes that the low cost of renewables can make them less profitable, so regulation may be required for adoption. [ 145 ] [ 146 ] The company also created a new report, "Global Materials Perspective". It says that the energy transition requires intensive mining of many materials, so even as coal mining is reduced, overall emissions from the mining and metals industry will decline only from 15% to 13% by 2035. [ 147 ] McKinsey & Company's founder, James O. McKinsey, introduced the concept of budget planning as a management framework in his fifth book, Budgetary Control , in 1922. [ 39 ] : 25 [ 148 ] : 422 The firm's first client was the treasurer of Armour & Company , who, along with other early McKinsey clients, had read Budgetary Control . In 1931, McKinsey created the General Survey Outline (GSO), a methodology for analyzing a company, based on ideas introduced in the 1924 book Business Administration . It was also known as the Banker's Survey, because McKinsey's clients who used it in the 1930s were predominantly banks. [ 35 ] : 22 After the Wagner Act gave employees certain rights to organize into unions in 1935, McKinsey started consulting corporations on employee relations. Later, in the 1950s, the work of a McKinsey consultant on compensation was influential in "skyrocketing executive pay". [ 35 ] The firm also helped many companies such as Heinz , IBM , and Hoover expand into Europe. [ 35 ] In the 1940s, McKinsey helped many corporations convert to wartime production for World War II.
[ 35 ] In 1958, it also helped organize NASA into an organization that relies heavily on contractors. [ 149 ] : 105 McKinsey created a report in 1953 for Dwight D. Eisenhower that was used to guide government appointments. [ 150 ] In 1973, McKinsey & Company led a project for a consortium of grocery chains represented by the U.S. Supermarket Ad Hoc Committee on a Uniform Grocery Product Code to create the barcode. [ 151 ] [ 152 ] According to the book Business Research Methods , the barcode became commonplace after a study by McKinsey persuaded Kroger to adopt it. [ 153 ] In the 1970s and 1980s, McKinsey helped European companies change their organizational structure to the M-form (multidivisional form), which organizes the company into semi-autonomous divisions that function around a product, industry or customer, rather than a function or expertise. [ 154 ] : 208 [ 155 ] : 110 In the 1980s, AT&T reduced investments in cell towers due to McKinsey's prediction that there would be only 900,000 cell phone subscribers by 2000. According to The Firm , this was "laughably off the mark": there were 109 million cellular subscribers by 2000. At the time, cell phones were bulky and expensive. [ 35 ] In 1999, McKinsey was hired by Sinochem to improve Sinochem's management and internal controls. [ 156 ] : 108–109 This was the first time a central state-owned enterprise of China had hired a foreign consulting firm. [ 156 ] : 109 The firm helped the Dutch government facilitate a turnaround for Hoogovens , the world's largest steel company as of 2013, through a $1 billion bankruptcy bailout. It also implemented a turnaround for the city of Glasgow , which had problems with unemployment and crime. McKinsey created the corporate structure for NationsBank when it was still a small company known as North Carolina National Bank. McKinsey was hired by General Motors to carry out a large-scale re-organization to help it compete with Japanese automakers. The book The Firm said it was an "unmitigated disaster" because McKinsey focused on corporate structure, whereas GM needed to compete through manufacturing process improvement. A McKinsey consultant said GM did not follow McKinsey's advice. [ 35 ] A 2002 article in BusinessWeek said that a series of bankruptcies of McKinsey clients in the early 2000s, such as Swissair , Kmart , and Global Crossing , raised questions as to whether McKinsey was responsible or had had a lapse in judgement. [ 47 ] McKinsey recommended that Swissair avoid high operating costs in its home country by developing partnerships with airlines based in other regions. In order to attract partners, Swissair acquired more than $1 billion in shares of other airlines, many of which were failing. This led to huge losses and bankruptcy for Swissair. [ 157 ] As part of a lawsuit against Allstate , 13,000 McKinsey documents were released, showing that McKinsey recommended that Allstate reduce payouts to insurance claimants by offering low settlements, delaying processing to wear out claimants through attrition, and fighting customers who protest in court. Allstate's profits doubled over the ten years after it adopted McKinsey's strategy, but the strategy also led to lawsuits alleging that Allstate was cheating claimants out of legitimate insurance claims. [ 158 ] [ 159 ] McKinsey works, and has worked, with many of the largest fossil-fuel-producing governments and companies. These include the largest oil company, Saudi Aramco , as well as Shell and BP .
The company has stated that "fossil fuels will continue to be a part of the energy mix to meet the world's energy needs". [ 14 ] McKinsey's internal forecasting in 2021 found that emissions from its clients would lead to 3 to 5 degrees of warming, that its client portfolio included "more than half of the world's worst polluters" and that its work on sustainability projects was "being used to launder the Firm's reputation". [ 160 ] [ 14 ] McKinsey works on the Oil Demand Sustainability Programme (ODSP), the Saudi Arabian government program designed to increase demand for fossil fuels in poorer countries in Africa and Asia. [ 15 ] [ 14 ] McKinsey stopped working for US Immigration and Customs Enforcement (ICE) after it was disclosed that the firm had done more than $20 million in consulting work for the agency. McKinsey managing partner Kevin Sneader said the contract, not widely known within the company until The New York Times reported it, had "rightly raised" concerns. [ 161 ] In 2019, The New York Times and ProPublica reported on newly uncovered documents which showed that McKinsey, as part of its work with ICE, proposed cuts in spending on food and medical care for migrants. [ 103 ] McKinsey also advocated for an acceleration of the deportation process, causing concerns among ICE staff that the due process rights of the migrants would be violated. [ 103 ] Previously, McKinsey managing partner Kevin Sneader had claimed that McKinsey had done no work for ICE in terms of developing and implementing immigration policy; the uncovered documents showed that to be false. [ 103 ] [ 162 ] In October 2018, in the wake of the assassination of Jamal Khashoggi , a Saudi dissident and journalist, The New York Times reported that McKinsey had identified the most prominent Saudi dissidents on Twitter and that the Saudi government subsequently repressed the dissidents and their families. One of the dissidents, Khalid al-Alkami, was arrested. Another dissident identified by McKinsey, Omar Abdulaziz in Canada, had two brothers imprisoned by the Saudi authorities and his cell phone hacked. McKinsey issued a statement, saying "We are horrified by the possibility, however remote, that [the report] could have been misused. We have seen no evidence to suggest that it was misused, but we are urgently investigating how and with whom the document was shared." [ 101 ] In December 2018, The New York Times reported that "the kingdom is such a vital client for the firm—the source of nearly 600 projects from 2011 to 2016 alone—that McKinsey chose to participate in a major Saudi investment conference in October 2018 even after the killing and dismemberment of a Washington Post columnist by Saudi agents." [ 102 ] On February 12, 2019, the European Parliament Greens/EFA group presented a motion for a resolution on the situation of women's rights defenders in Saudi Arabia, denouncing the involvement of foreign public relations companies in representing Saudi Arabia and handling its public image, particularly McKinsey & Company. [ 163 ] In February 2024, McKinsey was questioned in court about possible violations of federal disclosure rules . The company, along with three other consulting firms, was accused of refusing to provide information about their work for Saudi Arabia's Public Investment Fund and failing to disclose themselves as agents of the Saudi government.
[ 164 ] Representatives of the firms warned that their staff could face jail if they complied with the subpoenas, breaking strict rules imposed by Saudi Arabia on what information consulting firms can share with the US government. [ 165 ] [ 166 ] [ 167 ] McKinsey's business and policy support for authoritarian regimes came under scrutiny in December 2018, in the wake of a lavish company retreat in China held adjacent to prisons where thousands of Uyghurs were being detained without cause . [ 102 ] [ 168 ] In December 2021, NBC News reported McKinsey's connection to a manufacturing facility owned by DJI , a drone maker sanctioned by the United States Department of the Treasury for alleged complicity in aiding the persecution of Uyghurs in China . [ 169 ] In the preceding few years, McKinsey's clients included Saudi Arabia 's absolute monarchy, [ 101 ] [ 170 ] [ 171 ] Turkey's autocratic leader Recep Tayyip Erdogan , ousted former president of Ukraine Viktor Yanukovych , and several Chinese and Russian companies under sanctions. [ 102 ] In 2015, McKinsey's think tank, the Urban China Initiative, advised the Chinese government on its 13th five-year plan and its Made in China 2025 policy. As part of a project for China's National Development and Reform Commission , McKinsey's think tank advised the Chinese government to "deepen co-operation between business and the military and push foreign companies out of sensitive industries," according to a 2024 Financial Times report . [ 172 ] In response, US legislators Marco Rubio and Michael McCaul stated that McKinsey had undermined US security and called for it to be banned from securing US federal government contracts. [ 173 ] In October 2024, several US lawmakers called on the United States Department of Justice to investigate whether McKinsey misrepresented its work with Chinese government entities, including state-owned enterprises . [ 174 ] On October 18, 2024, the US House of Representatives Select Committee on the CCP reported that "McKinsey Equipped America's Foremost Adversary and Misrepresented Work for the Chinese Military Under Oath". [ 175 ] McKinsey is reported to have provided consulting services for the Russian state-owned enterprise Rostec , which is responsible for manufacturing missile engines used during Russia's war on Ukraine . [ 176 ] According to January 2023 reporting from Die Zeit , McKinsey consultants would provide consulting services to Gazprom and Rostec while in Germany on behalf of the German Federal Ministry of Defence . [ 177 ] [ 178 ] According to US Senator Maggie Hassan McKinsey has displayed a "pattern of behavior" that raised "grave concerns about conflicts of interest". [ 176 ] McKinsey has also done work for Sberbank , VTB bank , Gazprom, and Rosneft , which are all closely tied to the Kremlin. [ 176 ] The firm has been associated with a number of notable scandals, including the collapse of Enron in 2001, [ 98 ] the 2008 financial crisis , [ 98 ] and facilitating state capture in South Africa. [ 179 ] It has also drawn controversy for involvement with Purdue Pharma , [ 180 ] US Immigration and Customs Enforcement , [ 103 ] and authoritarian regimes . [ 102 ] [ 101 ] Michael Forsythe and Walt Bogdanich , reporters for The New York Times , wrote a book entitled When McKinsey Comes to Town about the controversially unethical work history of the company. 
[ 181 ] [ 182 ] Enron was the creation of Jeff Skilling , a McKinsey consultant of 21 years, who was later jailed for his role in the scandal. Enron reportedly used McKinsey on 20 different projects, [ 99 ] and McKinsey consultants had "used Enron as their sandbox." [ 99 ] Prior to the Enron scandal, McKinsey helped it shift from an oil and gas production company into an electric commodities trader, which led to significant growth in profits and revenues. [ 85 ] According to The Independent , there was "no suggestion that McKinsey was complicit in the subsequent scandal, [but] critics say the arrogance of Enron's leaders is emblematic of the McKinsey culture." [ 183 ] The government did not investigate McKinsey, which said it did not provide advice on Enron's accounting. [ 68 ] The Wall Street Journal questioned McKinsey's "liability" and its "close relationship with Enron", [ 184 ] and a 2002 BusinessWeek article suggested that the firm had ignored warning signs. [ 47 ] In his July 2002 in-depth BusinessWeek article on the aftermath of the Enron scandal, John Byrne wrote that McKinsey had been a "key architect of the strategic thinking that made Enron a Wall Street darling. In books, articles, and essays, its partners regularly stamped their imprimatur on many of Enron's strategies and practices, helping to position the energy giant as a corporate innovator worthy of emulation. The firm may not be the subject of any investigations, but its close involvement with Enron raises the question of whether McKinsey, like some other professional firms, ignored warning flags in order to keep an important account." [ 47 ] BusinessWeek described how McKinsey's culture had changed, as the "number of partners grew from 427 to 891", making it a "less personal place". [ 47 ] According to the article, "some current and former McKinsey consultants" said that McKinsey had lost the "ingrained values" that used to guide the firm. Citing the example of the dot-com bubble, the article said McKinsey had begun to take on "less prestigious companies" as clients and had allowed "its focus on building agenda-shaping relationships with top management at leading companies to slip." [ 47 ] Additionally, "there was a noticeable tilt toward bringing in revenue at the expense of developing knowledge." [ 47 ] McKinsey denied this. [ 47 ] McKinsey also denied giving Enron advice on financing issues or having suspicions that Enron was using improper accounting methods. [ 99 ] McKinsey is said to have played a significant role in the 2008 financial crisis by promoting the securitization of mortgage assets and encouraging banks to fund their balance sheets with debt, driving up risk, which "poisoned the global financial system and precipitated the 2008 credit meltdown". [ 98 ] Furthermore, McKinsey advised Allstate Insurance to purposely give low offers to claimants. The Huffington Post revealed that the strategy was to make claims "so expensive and so time-consuming that lawyers would start refusing to help clients." [ 185 ] In addition, in 2016, McKinsey partner Navdeep Arora was convicted of illegally depleting State Farm of over $500,000 over a period of eight years, in collaboration with a State Farm employee. [ 186 ] Valeant , a Canadian pharmaceutical company investigated by the SEC in 2015, has been accused of improper accounting and of using predatory price hikes to boost growth. [ 187 ] The Financial Times states that "Valeant's downfall is not exactly McKinsey's fault but its fingerprints are everywhere."
[ 100 ] Three out of six senior executives were recent ex-McKinsey employees, as was the chair of the 'talent and compensation' committee. [ 100 ] MIO Partners was a private investor in Valeant, and McKinsey consulted for Valeant on drug prices and acquisitions. [ 188 ] McKinsey advised opioid makers on how to "turbocharge" sales of OxyContin , proposed strategies to counter the emotional messages from mothers with teenagers who overdosed on OxyContin, and helped opioid makers circumvent regulation. [ 105 ] The firm also advised Purdue Pharma to offer pharmacies rebates based on the number of overdoses and addictions they caused. [ 180 ] In 2019, McKinsey projected that over 2,400 CVS customers would have an overdose or become reliant on opioids. [ 189 ] McKinsey estimated that a rebate of $14,810 per "event" would mean that Purdue would have to pay CVS $36.8 million that year. [ 190 ] In February 2021, McKinsey reached agreements with attorneys general in 49 states, five U.S. territories, and the District of Columbia. Across the settlements, the firm agreed to pay nearly $600 million to settle investigations into its role in promoting sales of OxyContin and fueling the greater opioid epidemic . [ 191 ] McKinsey has since apologized for its advice to opioid makers. [ 192 ] Records show that McKinsey worked for Purdue Pharma and other opioid makers over a 15-year period, from 2004 to 2019. [ 193 ] During 2018 and 2019, McKinsey collected at least $400 million consulting for pharmaceutical companies. McKinsey advised Mallinckrodt, the largest manufacturer of generic opioids, as well as Endo International, for which it consulted on the marketing of Opana . [ 194 ] McKinsey's consultation grew Endo into a leading generics manufacturer. McKinsey recommended targeting doctors who treat back pain in elderly and long-term care patients. [ 195 ] [ failed verification ] In April 2022, the New York Times reported that McKinsey had frequently allowed partners and other consultants to work for both government clients, such as the FDA , and pharmaceutical clients, such as Purdue. [ 196 ] These actions violated McKinsey's own internal ethical guidelines. [ 196 ] In December 2023, Reuters reported that McKinsey had agreed to pay an additional $78 million to settle claims by health insurers that McKinsey's consulting helped to fuel "an epidemic of opioid addiction through its work for drug companies". [ 197 ] Reuters reported the settlement would be the last in a series and that McKinsey "admitted to no wrongdoing". [ 197 ] In 2024, the company became the subject of a criminal investigation by the US Justice Department into its role in advising opioid manufacturers on how to boost sales. A grand jury was convened to determine the charges to be brought against the firm. [ 198 ] [ 13 ] It is also under investigation for obstruction of justice during the period when concerns were mounting about its activities. [ 13 ] The firm settled the investigation in December 2024 for $650 million, with conditions that it cannot market controlled substances for a period of five years. [ 199 ] [ 200 ] The agreement was filed in federal court in Abingdon, Virginia, and aims to resolve criminal charges brought as part of the latest corporate prosecution concerning the marketing of addictive painkillers.
[ 201 ] New York City paid McKinsey $27.5 million between 2014 and 2017 to reduce prison assaults in Rikers Island ; but the violence grew and the city abandoned many of the firm's recommendations. The consultancy's alleged failings included not soliciting the views of inmates or clinic staff; using the encrypted messaging app Wickr that deletes messages, allegedly to avoid transparency; initiatives involving the expanded use of Tasers, shotguns and K9 patrol dogs; replacing troublesome inmates with more accommodating ones in the test area, which skewed the data in favor of the project; the use of ineffective data-analytics software; and spreadsheet errors that inflated the baseline rate of violence, against which the project was measured. [ 202 ] McKinsey advised New York City's Rikers Island jail complex and tested an anti-violence strategy named "Restart" which occurred in Rikers housing units. [ 203 ] Jail administrators reported that the strategy resulted in violent crimes dropping more than 70% inside the Rikers housing units. [ 204 ] Later, it was found that McKinsey consultants and jail officials rigged the program by grouping compliant inmates into the housing units, and that violent crimes including "slashings and stabbings" increased over 1000% from 2011 to 2016. [ 205 ] [ 206 ] In February 2019, The New York Times ran a series of articles about McKinsey [ 207 ] and the in-house hedge fund it operates – McKinsey Investment Office, or MIO Partners. The articles claimed that there was "potential for undisclosed conflicts of interest between the fund's investments and the advice the firm sells to clients", since the hedge fund could benefit from the inside knowledge obtained through management consulting services. [ 104 ] The firm responded that "MIO and McKinsey employ separate staffs. MIO staff have no nonpublic knowledge of McKinsey clients. For the vast majority of assets under management, decisions about specific investments are made by third-party managers." [ 104 ] In 2019, McKinsey paid the Justice Department $15 million from fees earned to settle allegations relating to failure to disclose potential conflict in three bankruptcy cases that the firm had advised. [ 208 ] [ 209 ] In 2021, MIO Partners, an affiliate of McKinsey & Co. that invests almost $31 billion of money on behalf of its employees, was fined $18 million by the US regulator, SEC . The SEC said some of the same people making investment decisions for MIO Partners were McKinsey & Co. employees who had visibility into confidential information for companies for which McKinsey was consulting. [ 210 ] The SEC claimed that MIO Partners had advanced knowledge of upcoming mergers, bankruptcy, and financial results announcements for companies that the firm was consulting. [ 211 ] In January 2022, the Second U.S. Circuit Court of Appeals in Manhattan revived a lawsuit against McKinsey & Co. filed by retired turnaround specialist Jay Alix, accusing the consulting firm of concealing potential conflicts when seeking permission from bankruptcy courts to perform lucrative work on corporate restructurings. [ 212 ] In July 2023, former Prima Wawona CEO Dan Gerawan filed a lawsuit alleging that the investment firm Paine Schwartz used Prima Wawona, then America's largest stone fruit producer, to create financial gain for McKinsey [ 213 ] [ 214 ] and that many of the Paine Schwartz employees were former McKinsey employees. 
[ 215 ] The lawsuit alleges that in late 2020, Paine Schwartz hired McKinsey as consultants, without board approval, to make massive changes in Prima Wawona's operations. Thereafter, the company's performance deteriorated quickly, according to the suit. [ 216 ] Moody's downgraded the company's outlook from "stable" to "negative". [ 217 ] [ 218 ] In October 2023, Prima Wawona filed for bankruptcy. [ 213 ] [ 219 ] McKinsey was the company's largest creditor, owed $8 million. [ 220 ] The company was in default on $679 million in debt. [ 214 ] Efforts to sell the company in bankruptcy failed. In January 2024, the company announced that it would liquidate, lay off all 5,400 employees, and sell off more than 13,000 acres of farmland. [ 221 ] [ 222 ] The Gupta family (no relation to Rajat Gupta) had strategically placed corrupted individuals in various South African government, utilities and infrastructure sectors. It is alleged that McKinsey was complicit in this corruption by using the Guptas to obtain consulting contracts from certain state-owned enterprises, including Eskom and Transnet . [ 223 ] Working with Trillian Capital Partners (a consultancy which was owned by a Gupta associate), [ 224 ] they provided services to the value of R1 billion ($75 million) annually. Trillian was paid a commission for facilitating the business for McKinsey. [ 225 ] McKinsey hired law firm Norton Rose Fulbright to carry out an internal investigation over the allegations. McKinsey's then Managing Partner, Dominic Barton, issued a statement following an internal investigation, in which the firm "admitted that it found violations of its professional standards but denied any acts of bribery, corruption, and payments to Trillian." [ 226 ] Corruption Watch , a South African non-governmental organization, filed a complaint about the controversial contract to the US Department of Justice , alleging that there was a criminal conspiracy between McKinsey, Trillian and Eskom in contravention of US and South African law. [ 227 ] It was revealed in January 2018 that criminal complaints were filed against McKinsey & Company by the South African Companies and Intellectual Property Commission . South African prosecutors confirmed that they would enforce the seizing of assets from McKinsey. [ 228 ] South Africa's National Prosecuting Authority concluded in early 2018 that the payments to McKinsey and its local business partner, Trillian, were illegal, involving crimes such as fraud, theft, corruption and money laundering. McKinsey had subsequently been in discussion with Eskom and the National Prosecuting Authority's Asset Forfeiture Unit to agree on a transparent, legally appropriate process for returning the R 1 billion (US$74M) it had been paid – it was confirmed on 6 July 2018 that this had been concluded. [ 229 ] Eskom confirmed it received R99.5M in interest from McKinsey on July 23, 2018. The interest payment covers the two years since McKinsey was paid almost R 1bn in 2016. [ 230 ] Information relating to allegedly corrupt practices by McKinsey at Transnet in 2011 and 2012 came to light in late July 2018. The weekly Mail & Guardian newspaper reported that a "new forensic treasury report shows how controversial former Transnet and Eskom chief financial officer Anoj Singh enjoyed overseas trips at the expense of international consulting firm McKinsey, which scored multi-billion rand contracts at the state owned entities." 
The "report reiterates treasury's recommendations that Singh's conduct with regards to McKinsey should be referred to the elite crime-fighting unit, the Hawks, for investigations under the Prevention and Combating of Corrupt Activities Act (Precca). Under Precca, Singh would be investigated for allegations of corruption as payment for the overseas trips alone would constitute a form of gratification, which is illegal." [ 231 ] The Sunday City Press reported that the forensic report in turn reported that "multinational advisory firm McKinsey paid for Singh to go on lavish international trips to Dubai, Russia, Germany and the UK, after which their contract with Transnet was massively extended." [ 232 ] McKinsey issued a statement that the allegations were incorrect. McKinsey stated that "based on an extensive review encompassing interviews, email records and expense documents, our understanding is that McKinsey did not pay for Mr. Singh's airfare and hotel lodgings in connection with the CFO Forum and the meetings that took place around the CFO Forum in London and elsewhere in 2012 and 2013." [ 233 ] On 11 October 2019, the United States Treasury department announced that it had imposed wide-ranging financial sanctions on three Gupta brothers, Ajay, Atul and Rajesh (aka Tony) and their business associate Salim Essa under the United States Magnitsky Act . [ 234 ] The Economist reported in November 2019, that McKinsey's scandals, such as the 2016 South Africa scandal and the allegations of conflict of interest tied to its $12.7bn investment affiliate, McKinsey Investment Office (MIO), are relatively recent in terms of its long history. [ 235 ] The article said that McKinsey's legal challenges facing McKinsey's new global managing partner, Kevin Sneader, may be related to the company's fast-paced growth with an increase of 2,200 partners compared to 2009. During that same time period, the number of employees increased to 30,000 worldwide from 17,000. [ 235 ] In 2020, McKinsey representatives giving testimony to the Zondo Commission of Inquiry into State Capture placed blame for the firm's involvement in the corruption scandal on former McKinsey partner, Vikas Sagar. [ 236 ] During 2021 McKinsey & Co. agreed to repay R 870 million (US$63M) in fees to South African state logistics company Transnet SOC Ltd., seeking to distance itself from contracts linked to corruption allegations. [ 237 ] In April 2022 the Zondo Commission recommended that key Eskom executives be criminally investigated for improperly awarding consulting contracts to McKinsey & Company. [ 238 ] South Africa's National Prosecuting Authority announced on Friday, 30 September 2022 that it had criminally charged both McKinsey South Africa and former McKinsey partner, Vikas Sagar, with fraud, corruption and theft related to a contract to advise Transnet on buying new locomotives. [ 239 ] In 2024, McKinsey was ordered to pay a $122 million criminal penalty (and enter into a three-year deferred prosecution agreement ) to settle an investigation by the Justice Department and South Africa's National Prosecuting Authority for violations of the Foreign Corrupt Practices Act (FCPA); 50% of the penalty will be paid to South Africa (roughly R1.1 billion). McKinsey paid bribes to government officials in South Africa between 2012 and 2016. 
[ 240 ] [ 241 ] In December 2022, it was reported that the French National Financial Prosecutors' Office had raided the headquarters of President Emmanuel Macron 's Renaissance party and McKinsey's Paris office. [ 242 ] The raids were related to probes into false election campaign accounting, as well as possible favouritism and conspiracy. [ 243 ] The probe had been widened in October 2022 from an initial focus on McKinsey's taxes to include alleged underreporting of campaign consulting costs and allegations of favoritism. [ 244 ] [ 245 ] Even though the company had a turn over of €329 million, [ 246 ] they did not pay any corporation taxes in France in a decade, according to the French senate . [ 247 ] [ 248 ] [ 249 ] McKinsey consultants are alleged to have worked as unpaid volunteers on Macron's 2017 and 2022 election campaigns , in violation of French law. [ 244 ] The firm is subsequently alleged to have benefitted from special access and favorable government treatment, including the awarding of lucrative government contracts. [ 243 ] The French media has dubbed the scandal 'McKinsey Affair' [ 250 ] or 'McKinseygate'. [ 243 ] McKinsey is facing possible charges for corruption and tax fraud as a result of the investigation. [ 251 ] In July 2023, the case was still pending. [ 252 ] A January 2023 investigative report by CBC News revealed that Justin Trudeau 's government had spent at least $117.4 million on McKinsey consulting since coming to power, compared to $2.2 million spent by the prior government. [ 253 ] [ 254 ] [ 255 ] [ 256 ] [ 257 ] According to documents obtained by Radio-Canada, all of those contracts were sole-source, meaning other firms were not given the chance to bid for the contracts. [ 255 ] Further investigative reporting identified at least $84 million in McKinsey consulting expenses between March 2021 and November 2022 alone. [ 258 ] [ 259 ] According to anonymous sources with major roles at Immigration, Refugees and Citizenship Canada (IRCC), McKinsey is reported to have a particularly large and growing influence over Canadian immigration policy. [ 255 ] [ 256 ] Policy is reported to have been decided on without input from public servants, and with minimal consideration for the public interest. [ 256 ] Canada's immigration targets have closely followed goals set in a plan by previous McKinsey head Dominic Barton , who outlined these plans in his 2016 report of the Advisory Council on Economic Growth and through his work with the Century Initiative . [ 255 ] Both the report and the Century Initiative advocate for a steep increase in immigration to bring Canada's population to 100 million by 2100. [ 256 ] According to one of the IRCC whistleblowers, the department was informed that Barton's report was a "foundational plan" in spite of reservations expressed by the then-immigration minister, John McCallum . [ 255 ] On January 10, 2023, Canadian opposition parties, including the Conservative Party of Canada , the New Democratic Party of Canada , and Bloc Quebecois , called for a parliamentary inquiry into federal contracts awarded to McKinsey. [ 259 ] [ 260 ] [ 261 ] [ 262 ] The opposition is demanding that the government disclose "contracts, conversations, records of work done, meetings held, text messages, email exchanges, everything that the government has with the company since taking office". 
[ 263 ] McKinsey has thus far refused to answer CBC News questions regarding its role and agreements with the federal government, while the government has refused to provide copies of the company's reports. [ 256 ] In response to the controversy, McKinsey issued a statement on its website indicating that it "welcomes the opportunity" to provide information to parliament, and that it "does not make policy recommendations on immigration or any other topic". [ 261 ] Trudeau asked fellow Liberal Party members Treasury Board President Mona Fortier and Procurement Minister Helena Jaczek to review the contracts return a final report. On 23 March 2023, the Treasury Board announced that audits had determined that departments did not consistently follow certain administrative rules and procedures, but that there was "broad compliance with values and ethics commitments." [ 254 ] According to the Treasury Board, while the audits raised questions about fairness, transparency and conflicts of interest, no evidence was found of political direction in awarding the contracts. [ 253 ] In 2021, more than 1,100 McKinsey employees signed a letter calling out the firm for working with 43 of the 100 most polluting companies. According to The New York Times , these 43 clients alone were responsible for more than a third of the world's carbon emissions in 2018. The letter called on the firm to disclose how much carbon its clients emit. [ 264 ] McKinsey executives said they would continue to advise major carbon polluters. Several employees resigned from the firm following the letter. [ 265 ] [ 266 ] In April 2022, McKinsey, Alphabet Inc. , Shopify , Meta Platforms , and Stripe, Inc. announced a $925 million advance market commitment of carbon dioxide removal (CDR) from companies that are developing CDR technology over the next 9 years. [ 267 ] [ 268 ] More than 400 civil society groups signed a letter of protest to Kenyan President William Ruto , accusing McKinsey of influencing the 2023 Africa Climate Summit. [ 269 ] The letter claimed that the agenda for the summit, as proposed by McKinsey, "reflects the interests of the US, McKinsey and the Western corporations they represent." [ 270 ] Leaked documents revealed that the consulting firm tried to push controversial carbon market schemes that would benefit its fossil fuel clients. [ 271 ] In 2023, an AFP investigation revealed in multiple leaked documents that McKinsey was using its position as primary advisor to COP28 hosts, the United Arab Emirates , to push the interest of its oil and gas clients ( ExxonMobil and Aramco ). [ 271 ] McKinsey has been accused of putting its own interests ahead of the climate by sources involved in preparatory meetings for COP28. McKinsey's energy scenario for the COP28 presidency would allow for continued investment in fossil fuels, which would undermine the goals of the Paris Agreement ; an "energy transition narrative" recommends oil use to be reduced by only 50% by 2050, and that trillions of dollars should continue to be invested in high-emission assets each year to at least 2050. [ 272 ] McKinsey & Company published several reports on business benefits of diversity, equity, and inclusion . [ 273 ] The reports were criticized as insufficiently investigating the difference between correlation and causality , lacking robustness, and overgeneralizing to broad claims. [ 274 ] In January 2025, McKinsey Partner Rachel Riley joined the Department of Government Efficiency (DOGE), an organization headed by Elon Musk . 
[ 275 ] [ 276 ]
https://en.wikipedia.org/wiki/McKinsey_&_Company
The McLafferty rearrangement is a reaction observed in mass spectrometry during the fragmentation or dissociation of organic molecules. It is sometimes found that a molecule containing a keto-group undergoes β-cleavage, with the gain of the γ-hydrogen atom, as first reported by Anthony Nicholson working in the Division of Chemical Physics at the CSIRO in Australia. [ 1 ] This rearrangement may take place by a radical or ionic mechanism. A description of the reaction was later published by the American chemist Fred McLafferty in 1959 leading to his name being associated with the process. [ 2 ] [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/McLafferty_rearrangement
The McMurry reaction is an organic reaction in which two ketone or aldehyde groups are coupled to form an alkene using a titanium chloride compound such as titanium(III) chloride and a reducing agent . The reaction is named after its co-discoverer, John E. McMurry . The McMurry reaction originally involved the use of a mixture of TiCl 3 and LiAlH 4 , which produces the active reagents. Related species have been developed involving the combination of TiCl 3 or TiCl 4 with various other reducing agents, including potassium , zinc , and magnesium . [ 1 ] [ 2 ] This reaction is related to the Pinacol coupling reaction , which also proceeds by reductive coupling of carbonyl compounds. This reductive coupling can be viewed as involving two steps. First is the formation of a pinacolate (1,2- diolate ) complex, a step which is equivalent to the pinacol coupling reaction . The second step is the deoxygenation of the pinacolate, which yields the alkene ; this second step exploits the oxophilicity of titanium. Several mechanisms have been discussed for this reaction. [ 3 ] Low-valent titanium species induce coupling of the carbonyls by single electron transfer to the carbonyl groups. The required low-valent titanium species are generated via reduction , usually with zinc powder. This reaction is often performed in THF because it solubilizes intermediate complexes, facilitates the electron transfer steps, and is not reduced under the reaction conditions. The nature of the low-valent titanium species formed varies, as the products formed by reduction of the precursor titanium halide complex will naturally depend upon both the solvent (most commonly THF or DME) and the reducing agent employed: typically, lithium aluminum hydride, zinc-copper couple, zinc dust, magnesium-mercury amalgam, magnesium, or alkali metals. [ 4 ] Bogdanovic and Bolte identified the nature and mode of action of the active species in some classical McMurry systems, [ 5 ] and an overview of proposed reaction mechanisms has been published. [ 3 ] It is of note that titanium dioxide is not generally a product of the coupling reaction itself. Although titanium dioxide is usually the eventual fate of the titanium used in these reactions, it is generally formed upon the aqueous workup of the reaction mixture. [ 4 ] The original publication by Mukaiyama demonstrated reductive coupling of ketones using reduced titanium reagents. [ 6 ] McMurry and Fleming coupled retinal to give carotene using a mixture of titanium trichloride and lithium aluminium hydride . Other symmetrical alkenes were prepared similarly, e.g. from dihydrocivetone , adamantanone and benzophenone (the latter yielding tetraphenylethylene). A McMurry reaction using titanium tetrachloride and zinc is employed in the synthesis of a first-generation molecular motor . [ 7 ] In another example, Nicolaou's total synthesis of Taxol uses this reaction, although the coupling stops with the formation of a cis-diol rather than an olefin. Optimized procedures employ the dimethoxyethane complex of TiCl 3 in combination with the Zn(Cu) couple. The first porphyrin isomer, porphycene, was synthesised by McMurry coupling. [ 8 ]
https://en.wikipedia.org/wiki/McMurry_reaction
In automata theory , McNaughton's theorem refers to a theorem that asserts that the set of ω-regular languages is identical to the set of languages recognizable by deterministic Muller automata . [ 1 ] This theorem is proven by supplying an algorithm to construct a deterministic Muller automaton for any ω-regular language and vice versa. This theorem has many important consequences. Since (non-deterministic) Büchi automata and ω-regular languages are equally expressive , the theorem implies that Büchi automata and deterministic Muller automata are equally expressive. Since complementation of deterministic Muller automata is trivial, the theorem implies that Büchi automata/ω-regular languages are closed under complementation. In McNaughton's original paper, the theorem was stated as: "An ω-event is regular if and only if it is finite-state." In modern terminology, ω-events are commonly referred to as ω-languages . Following McNaughton's definition, an ω-event is a finite-state event if there exists a deterministic Muller automaton that recognizes it. One direction of the theorem can be proven by showing that any given Muller automaton recognizes an ω-regular language. Suppose A = ( Q ,Σ,δ, q 0 , F ) is a deterministic Muller automaton . The union of finitely many ω-regular languages produces an ω-regular language; therefore it can be assumed without loss of generality that the Muller acceptance condition F contains exactly one set of states { q 1 , ... , q n }. Let α be the regular language whose elements take A from q 0 to q 1 . For 1 ≤ i ≤ n , let β i be a regular language whose elements take A from q i to q ( i mod n )+1 without passing through any state outside of { q 1 , ... , q n }. It is claimed that α(β 1 ... β n ) ω is the ω-regular language recognized by the Muller automaton A . It is proved as follows. Suppose w is a word accepted by A . Let ρ be the run that led to the acceptance of w . For a time instant t , let ρ( t ) be the state visited by ρ at time t . We construct an infinite and strictly increasing sequence of time instants t 1 , t 2 , ... such that only states in { q 1 , ... , q n } appear after time t 1 , and such that for each a and b , ρ( t na+b ) = q b . Such a sequence exists because all and only the states of { q 1 , ... , q n } appear in ρ infinitely often. By the above definitions of α and the β's, it can easily be shown that the existence of such a sequence implies that w is an element of α(β 1 ... β n ) ω . Conversely, suppose w ∈ α(β 1 ... β n ) ω . By the definition of α, there is an initial segment of w that is an element of α and thus leads A to the state q 1 . From there on, the run never assumes a state outside of { q 1 , ... , q n }, due to the definitions of the β's, and all the states in the set are repeated infinitely often. Therefore, A accepts the word w . The other direction of the theorem can be proven by showing that there exists a deterministic Muller automaton that recognizes a given ω-regular language. A deterministic Muller automaton for the union of finitely many deterministic Muller automata can easily be constructed; therefore, without loss of generality, we assume that the given ω-regular language is of the form αβ ω . Consider an ω-word w = a 1 a 2 ... ∈ αβ ω . Let w ( i , j ) be the finite segment a i+1 ... a j of w . For building a Muller automaton for αβ ω , we introduce the following two concepts with respect to w . Let p be the number of states in the minimal deterministic finite automaton A β* recognizing the language β*.
Now we prove two lemmas about the above two concepts. Both the concepts of "favor" and "equivalence" are used in Lemma 2. Now, we use the lemma to construct a Muller automaton for the language αβ ω . The proposed automaton accepts a word if and only if there exists a time i satisfying the right-hand side of Lemma 2. The machine below is described informally; note that it is a deterministic Muller automaton. The machine contains p+2 deterministic finite automata and a master controller, where p is the size of A β* . One of the p+2 machines recognizes αβ*; this machine reads the input in every cycle and communicates to the master controller, at each time i , whether or not w (0,i) ∈ αβ*. The remaining p+1 machines are copies of A β* . The master can set the A β* machines dormant or active. If the master sets an A β* machine to be dormant, it remains in its initial state, oblivious to the input. If the master activates an A β* machine, it reads the input and changes state accordingly, until the master makes it dormant again and forces it back to the initial state. The master can make an A β* machine active and dormant as many times as it wants. The master stores the following information about the A β* machines at each time instant. Initially, the master may behave in one of two ways, depending on α: if α contains the empty word, only one of the A β* machines is active; otherwise, none of the A β* machines is active at the start. Later, at some time i , if w (0,i) ∈ αβ* and none of the A β* machines is in its initial state, the master activates one of the dormant machines, and the newly activated A β* machine starts receiving input from time i+1. If at some time two A β* machines reach the same state, the master makes dormant the machine that was activated later. Note that the master can make the above decisions using the information it stores. For the output, the master also has a pair of red and green lights corresponding to each A β* machine. If an A β* machine goes from the active state to the dormant state, the corresponding red light flashes. The green light for some A β* machine M, which was activated at time j, flashes at time i in the following two situations: Note that the green light for M does not flash every time a machine goes dormant because of M. The above description of the full machine can be viewed as a large deterministic automaton. It remains to define the Muller acceptance condition. In this large automaton, we define μ n to be the set of states in which the green light corresponding to the n-th A β* machine flashes and its red light does not flash, and ν n to be the set of states in which the red light corresponding to the n-th A β* machine does not flash. So, the Muller acceptance condition is F = { S | ∃n μ n ⊆ S ⊆ ν n }. This finishes the construction of the desired Muller automaton. Q.E.D. Since McNaughton's proof, many other proofs have been proposed. The following are some of them.
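As an illustration of the acceptance condition used throughout this construction, the following sketch (not part of the article; the automaton and the example words are assumed) checks whether a deterministic Muller automaton accepts an ultimately periodic word u·v^ω by computing the set of states visited infinitely often.

```python
# A minimal sketch: acceptance of an ultimately periodic word u.v^omega by a
# deterministic Muller automaton.  The example automaton and words are assumptions.

def run(delta, state, word):
    """Run the deterministic transition function over a finite word."""
    for symbol in word:
        state = delta[(state, symbol)]
    return state

def muller_accepts(delta, q0, acceptance, u, v):
    """Accept u.v^omega iff the set of states visited infinitely often is in `acceptance`."""
    state = run(delta, q0, u)          # consume the finite prefix u
    seen = {}                          # state at the start of each v-block -> block index
    history = [state]
    while state not in seen:           # iterate v until a block-boundary state repeats
        seen[state] = len(history) - 1
        state = run(delta, state, v)
        history.append(state)
    loop_start = seen[state]
    inf_states = set()
    for s in history[loop_start:-1]:   # boundary states inside the cycle...
        cur = s
        inf_states.add(cur)
        for symbol in v:               # ...plus every state passed while reading v from s
            cur = delta[(cur, symbol)]
            inf_states.add(cur)
    return inf_states in acceptance

# Example: automaton over {'a', 'b'} whose Muller condition F = {{0, 1}} accepts
# exactly the words containing infinitely many a's and infinitely many b's.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
F = [{0, 1}]
print(muller_accepts(delta, 0, F, list("a"), list("ab")))   # True:  a(ab)^omega
print(muller_accepts(delta, 0, F, list("b"), list("a")))    # False: b a^omega
```

Because the automaton is deterministic, the state reached after each copy of v eventually cycles, so the set of infinitely visited states can be read off a finite prefix of the run.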
https://en.wikipedia.org/wiki/McNaughton's_theorem
In the general theory of relativity , the McVittie metric is the exact solution of Einstein's field equations that describes a black hole or massive object immersed in an expanding cosmological spacetime . The solution was first fully obtained by George McVittie in the 1930s, while investigating the effect of the then recently discovered expansion of the Universe on a mass particle. The simplest case of a spherically symmetric solution to the field equations of General Relativity with a cosmological constant term , the Schwarzschild-De Sitter spacetime , arises as a specific case of the McVittie metric, with positive 3-space scalar curvature κ = +1 and constant Hubble parameter H(t) = H 0 . In isotropic coordinates , the McVittie metric is given by [ 1 ] where dΩ² is the usual line element for the Euclidean sphere, M is identified as the mass of the massive object, a(t) is the usual scale factor found in the FLRW metric , which accounts for the expansion of the space-time, and K(r) is a curvature parameter related to the scalar curvature k of the 3-space exactly as in the FLRW spacetime . It is generally assumed that ȧ(t) > 0; otherwise the Universe is undergoing a contraction . One can define the time-dependent mass parameter μ(t) ≡ GM/(2c²a(t)r), which accounts for the mass density inside the expanding, comoving radius a(t)r at time t, to write the metric in a more succinct way. From here on, it is useful to define m = GM/c². For McVittie metrics with the general expanding FLRW solution properties H(t) = ȧ(t)/a(t) > 0 and lim t→∞ H(t) = H 0 = 0, the spacetime has the property of containing at least two singularities . One is a cosmological, null-like naked singularity at the smallest positive root r − of the equation 1 − 2m/r − H 0 ²r² = 0. This is interpreted as the black hole event horizon in the case where H 0 > 0. [ 2 ] For the H 0 > 0 case, there is an event horizon at r = r − , but no singularity, which is extinguished by the existence of an asymptotic Schwarzschild-De Sitter phase of the spacetime. [ 2 ] The second singularity lies in the causal past of all events in the space-time, and is a space-time singularity at r = 2m, μ(t) = 1, which, due to its causal-past nature, is interpreted as the usual Big-Bang-like singularity. There are also at least two event horizons: one at the largest solution of 1 − 2m/r − H 0 ²r² = 0, which is space-like and protects the Big-Bang singularity at finite past time; and one at r = r − , the smallest root of the equation, also at finite time. The second event horizon becomes a black hole horizon for the H 0 > 0 case. [ 2 ]
One can obtain the Schwarzschild and Robertson-Walker metrics from the McVittie metric in the exact limits of k = 0, ȧ(t) = 0 and μ(t) = 0, respectively. In trying to describe the behavior of a mass particle in an expanding Universe, the original paper of McVittie describes a black hole spacetime with decreasing Schwarzschild radius r s for an expanding surrounding cosmological spacetime. [ 1 ] However, in the limit of a small mass parameter μ(t), one can also interpret the solution as a perturbed FLRW spacetime , with μ playing the role of the Newtonian perturbation. Below we describe how to derive these analogies between the Schwarzschild and FLRW spacetimes from the McVittie metric. In the case of a flat 3-space, with scalar curvature constant k = 0, the metric (1) becomes, for each instant of cosmic time t 0 , the metric of the region outside of a Schwarzschild black hole in isotropic coordinates, with Schwarzschild radius r S = 2GM/(a(t 0 )c²). To make this equivalence more explicit, one can make a change of radial variables to obtain the metric in Schwarzschild coordinates . The interesting feature of this form of the metric is that one can clearly see that the Schwarzschild radius, which dictates at what distance from the center of the massive body the event horizon is formed, shrinks as the Universe expands. For a comoving observer , which accompanies the Hubble flow, this effect is not perceptible, as its radial coordinate is given by r′ comov = a(t)r′, such that, for the comoving observer, r S = 2M/r′ comov is constant, and the event horizon will remain static. In the case of a vanishing mass parameter μ(t) = 0, the McVittie metric becomes exactly the FLRW metric in spherical coordinates, which leads to the exact Friedmann equations for the evolution of the scale factor a(t). If one takes the limit of a small mass parameter μ(t) = M/(2a(t)r) ≪ 1, the metric (1) can be mapped to a perturbed FLRW spacetime in the Newtonian gauge , with perturbation potential Φ = 2μ(t); that is, one can understand the small mass of the central object as the perturbation in the FLRW metric.
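As a small numerical illustration of the horizon structure discussed above, the sketch below finds the positive roots of 1 − 2m/r − H 0 ²r² = 0; the values of m and H 0 are arbitrary assumptions, and geometric units (G = c = 1) are used.

```python
# A minimal sketch (not from the article): locating the two horizon radii of an
# asymptotically Schwarzschild-De Sitter McVittie spacetime from the positive
# roots of 1 - 2m/r - H0^2 r^2 = 0, in geometric units.  m and H0 are arbitrary.
import numpy as np

def horizon_radii(m, H0):
    """Return the positive real roots of H0^2 r^3 - r + 2m = 0, in ascending order."""
    # Multiplying 1 - 2m/r - H0^2 r^2 = 0 by -r gives the cubic H0^2 r^3 - r + 2m = 0.
    roots = np.roots([H0**2, 0.0, -1.0, 2 * m])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[real > 0])

m, H0 = 1.0, 0.01            # two positive roots exist only when 27 m^2 H0^2 < 1
r_minus, r_plus = horizon_radii(m, H0)
print(f"inner (black-hole-like) horizon r_- ~ {r_minus:.4f} (close to 2m = {2*m})")
print(f"outer (cosmological) horizon    r_+ ~ {r_plus:.4f} (close to 1/H0 = {1/H0})")
```

For small values of m·H 0 the smaller root sits just outside r = 2m and the larger root just inside r = 1/H 0 , matching the Schwarzschild and De Sitter limits.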
https://en.wikipedia.org/wiki/McVittie_metric
Acetone ( 2-propanone or dimethyl ketone ) is an organic compound with the formula (CH 3 ) 2 CO . [ 22 ] It is the simplest and smallest ketone ( R−C(=O)−R' ). It is a colorless, highly volatile , and flammable liquid with a characteristic pungent odor. [ 23 ] Acetone is miscible with water and serves as an important organic solvent in industry, home, and laboratory. About 6.7 million tonnes were produced worldwide in 2010, mainly for use as a solvent and for production of methyl methacrylate and bisphenol A , which are precursors to widely used plastics . [ 24 ] [ 25 ] It is a common building block in organic chemistry . It serves as a solvent in household products such as nail polish remover and paint thinner . It has volatile organic compound (VOC)-exempt status in the United States. [ 26 ] Acetone is produced and disposed of in the human body through normal metabolic processes. Small quantities of it are present naturally in blood and urine. People with diabetic ketoacidosis produce it in larger amounts. Ketogenic diets that increase ketone bodies (acetone, β-hydroxybutyric acid and acetoacetic acid ) in the blood are used to counter epileptic attacks in children who suffer from refractory epilepsy. [ 27 ] From the 17th century, and before modern developments in organic chemistry nomenclature , acetone was given many different names. They included "spirit of Saturn", which was given when it was thought to be a compound of lead and, later, "pyro-acetic spirit" and "pyro-acetic ester". [ 6 ] Prior to the name "acetone" being coined by French chemists (see below ), it was named "mesit" (from the Greek μεσίτης, meaning mediator) by Carl Reichenbach , who also claimed that methyl alcohol consisted of mesit and ethyl alcohol . [ 28 ] [ 6 ] Names derived from mesit include mesitylene and mesityl oxide which were first synthesised from acetone. In 1839, the name "acetone" began to be used and the term was composed of “daughter of” and acetum (acetic acid) because it was obtained from acetic acid. [ 23 ] Unlike many compounds with the acet- prefix which have a 2-carbon chain, acetone has a 3-carbon chain. That has caused confusion because there cannot be a ketone with 2 carbons. The prefix refers to acetone's relation to vinegar ( acetum in Latin , also the source of the words "acid" and "acetic"), rather than its chemical structure. [ 29 ] Acetone was first produced by Andreas Libavius in 1606 by distillation of lead(II) acetate . [ 30 ] [ 31 ] In 1832, French chemist Jean-Baptiste Dumas and German chemist Justus von Liebig determined the empirical formula for acetone. [ 32 ] [ 33 ] In 1833, French chemists Antoine Bussy and Michel Chevreul decided to name acetone by adding the suffix -one to the stem of the corresponding acid (viz, acetic acid ) just as a similarly prepared product of what was then confused with margaric acid was named margarone. [ 34 ] [ 29 ] By 1852, English chemist Alexander William Williamson realized that acetone was methyl acetyl ; [ 35 ] the following year, the French chemist Charles Frédéric Gerhardt concurred. [ 36 ] In 1865, the German chemist August Kekulé published the modern structural formula for acetone. [ 37 ] [ 38 ] Johann Josef Loschmidt had presented the structure of acetone in 1861, [ 39 ] but his privately published booklet received little attention. During World War I , Chaim Weizmann developed the process for industrial production of acetone (Weizmann Process). 
[ 40 ] In 2010, the worldwide production capacity for acetone was estimated at 6.7 million tonnes per year. [ 41 ] With 1.56 million tonnes per year, the United States had the highest production capacity, [ 42 ] followed by Taiwan and China . The largest producer of acetone is INEOS Phenol , owning 17% of the world's capacity, with significant capacity (7–8%) also held by Mitsui , Sunoco and Shell in 2010. [ 41 ] INEOS Phenol also owns the world's largest production site (420,000 tonnes/annum) in Beveren (Belgium). The spot price of acetone in summer 2011 was 1100–1250 USD/tonne in the United States. [ 43 ] Acetone is produced directly or indirectly from propene . Approximately 83% of acetone is produced via the cumene process ; [ 25 ] as a result, acetone production is tied to phenol production. In the cumene process, benzene is alkylated with propylene to produce cumene , which is oxidized by air to produce phenol and acetone: Other processes involve the direct oxidation of propylene ( Wacker-Hoechst process ), or the hydration of propylene to give 2-propanol , which is oxidized (dehydrogenated) to acetone. [ 25 ] Previously, acetone was produced by the dry distillation of acetates , for example calcium acetate in ketonic decarboxylation . Later, during World War I , acetone was produced using acetone-butanol-ethanol fermentation with Clostridium acetobutylicum bacteria , which was developed by Chaim Weizmann (later the first president of Israel ) in order to help the British war effort [ 25 ] [ 44 ] in the preparation of Cordite . [ 45 ] This acetone-butanol-ethanol fermentation was eventually abandoned when newer methods with better yields were found. [ 25 ] Acetone is reluctant to form a hydrate: [ 46 ] Like most ketones, acetone exhibits keto–enol tautomerism , in which the nominal keto structure (CH 3 ) 2 C=O of acetone itself is in equilibrium with the enol isomer (CH 3 )C(OH)=(CH 2 ) ( prop-1-en-2-ol ). In acetone vapor at ambient temperature, only 2.4 × 10 −7 % of the molecules are in the enol form. [ 47 ] In the presence of suitable catalysts , two acetone molecules also combine to form the compound diacetone alcohol (CH 3 )C=O(CH 2 )C(OH)(CH 3 ) 2 , which on dehydration gives mesityl oxide (CH 3 )C=O(CH)=C(CH 3 ) 2 . This product can further combine with another acetone molecule, with loss of another molecule of water, yielding phorone and other compounds. [ 48 ] Acetone is a weak Lewis base that forms adducts with soft acids like I 2 and hard acids like phenol . Acetone also forms complexes with divalent metals. [ 49 ] [ 50 ] Under ultraviolet light, acetone fluoresces. [ 51 ] The flame temperature of pure acetone is 1980 °C. [ 52 ] At its melting point (−96 °C), acetone is claimed to polymerize to give a white elastic solid, soluble in acetone, that is stable for several hours at room temperature. To do so, a vapor of acetone is co-condensed with magnesium as a catalyst onto a very cold surface. [ 53 ] [ 54 ] [ 55 ] Humans exhale several milligrams of acetone per day. It arises from the decarboxylation of acetoacetate . [ 56 ] [ 57 ] Small amounts of acetone are produced in the body by the decarboxylation of ketone bodies . Certain dietary patterns, including prolonged fasting and high-fat low-carbohydrate dieting, can produce ketosis , in which acetone is formed in body tissue. Certain health conditions, such as alcoholism and diabetes, can produce ketoacidosis , uncontrollable ketosis that leads to a sharp, and potentially fatal, increase in the acidity of the blood.
Since it is a byproduct of fermentation, acetone is a byproduct of the distillery industry. [ 56 ] Acetone can then be metabolized either by CYP2E1 via methylglyoxal to D -lactate and pyruvate , and ultimately glucose /energy, or by a different pathway via propylene glycol to pyruvate , lactate , acetate (usable for energy) and propionaldehyde . [ 58 ] [ 59 ] [ 60 ] About a third of the world's acetone is used as a solvent, and a quarter is consumed as acetone cyanohydrin , a precursor to methyl methacrylate . [ 24 ] Acetone is used to synthesize methyl methacrylate . It begins with the initial conversion of acetone to acetone cyanohydrin via reaction with hydrogen cyanide (HCN): In a subsequent step, the nitrile is hydrolyzed to the unsaturated amide , which is esterified : The third major use of acetone (about 20%) [ 24 ] is synthesizing bisphenol A . Bisphenol A is a component of many polymers such as polycarbonates , polyurethanes , and epoxy resins . The synthesis involves the condensation of acetone with phenol : Many millions of kilograms of acetone are consumed in the production of the solvents methyl isobutyl alcohol and methyl isobutyl ketone . These products arise via an initial aldol condensation to give diacetone alcohol . [ 25 ] Condensation with acetylene gives 2-methylbut-3-yn-2-ol , precursor to synthetic terpenes and terpenoids . [ 61 ] Acetone is a good solvent for many plastics and some synthetic fibers. It is used for thinning polyester resin , cleaning tools used with it, and dissolving two-part epoxies and superglue before they harden. It is used as one of the volatile components of some paints and varnishes . [ 23 ] As a heavy-duty degreaser, it is useful in the preparation of metal prior to painting or soldering , and to remove rosin flux after soldering (to prevent adhesion of dirt and electrical leakage and perhaps corrosion or for cosmetic reasons), although it may attack some electronic components, such as polystyrene capacitors. [ 62 ] Although itself flammable , acetone is used extensively as a solvent for the safe transportation and storage of acetylene , which cannot be safely pressurized as a pure compound. Vessels containing a porous material are first filled with acetone followed by acetylene, which dissolves into the acetone. One litre of acetone can dissolve around 250 litres of acetylene at a pressure of 10 bars (1.0 MPa). [ 63 ] [ 64 ] Acetone is used as a solvent by the pharmaceutical industry and as a denaturant in denatured alcohol . [ 65 ] Acetone is also present as an excipient in some pharmaceutical drugs . [ 66 ] [ needs update ] A variety of organic reactions employ acetone as a polar , aprotic solvent , e.g. the Jones oxidation . Because acetone is cheap, volatile, and dissolves or decomposes with most laboratory chemicals, an acetone rinse is the standard technique to remove solid residues from laboratory glassware before a final wash. [ 67 ] Despite common desiccatory use, acetone dries only via bulk displacement and dilution. It forms no azeotropes with water (see azeotrope tables ). [ 68 ] Acetone also removes certain stains from microscope slides . [ 69 ] Acetone freezes well below −78 °C. An acetone/dry ice mixture cools many low-temperature reactions. [ 70 ] Make-up artists use acetone to remove skin adhesive from the netting of wigs and mustaches by immersing the item in an acetone bath, then removing the softened glue residue with a stiff brush. 
[ 71 ] Acetone is a main ingredient in many nail polish removers because it breaks down nail polish. [ 72 ] It is used for all types of nail polish removal, like gel nail polish, dip powder and acrylic nails. [ 73 ] Proteins precipitate in acetone. [ 74 ] The chemical modifies peptides, both at α- or ε-amino groups, and in a poorly understood but rapid modification of certain glycine residues. [ 74 ] In pathology , acetone helps find lymph nodes in fatty tissues (such as the mesentery ) for tumor staging . [ 75 ] The liquid dissolves the fat and hardens the nodes, making them easier to find. [ 76 ] Dermatologists use acetone with alcohol for acne treatments to chemically peel dry skin. Common agents used today for chemical peeling are salicylic acid , glycolic acid , azelaic acid , 30% salicylic acid in ethanol , and trichloroacetic acid (TCA). Prior to chemexfoliation, the skin is cleaned and excess fat removed in a process called defatting. Acetone, hexachlorophene , or a combination of these agents was used in this process. [ 77 ] Acetone has been shown to have anticonvulsant effects in animal models of epilepsy , in the absence of toxicity, when administered in millimolar concentrations. [ 78 ] It has been hypothesized that the high-fat low-carbohydrate ketogenic diet used clinically to control drug-resistant epilepsy in children works by elevating acetone in the brain. [ 78 ] Because of their higher energy requirements, children have higher acetone production than most adults – and the younger the child, the higher the expected production. This indicates that children are not uniquely susceptible to acetone exposure. External exposures are small compared to the exposures associated with the ketogenic diet. [ 79 ] Acetone's most hazardous property is its extreme flammability. In small amounts, acetone burns with a dull blue flame ; in larger amounts, fuel evaporation causes incomplete combustion and a bright yellow flame . When hotter than acetone's flash point of −20 °C (−4 °F), air mixtures of 2.5‑12.8% acetone (by volume) may explode or cause a flash fire . Vapors can flow along surfaces to distant ignition sources and flash back. Static discharge may also ignite acetone vapors, though acetone has a very high ignition initiation energy and accidental ignition is rare. [ 80 ] Acetone's auto-ignition temperature is the relatively high 465 °C (869 °F); [ 19 ] moreover, auto-ignition temperature depends upon experimental conditions, such as exposure time, and has been quoted as high as 535 °C. [ 81 ] Even pouring or spraying acetone over red-glowing coal will not ignite it, due to the high vapour concentration and the cooling effect of evaporation. [ 80 ] Acetone should be stored away from strong oxidizers, such as concentrated nitric and sulfuric acid mixtures. [ 82 ] It may also explode when mixed with chloroform in the presence of a base. [ 83 ] [ clarification needed ] When oxidized without combustion, for example with hydrogen peroxide , acetone may form acetone peroxide , a highly unstable primary explosive . Acetone peroxide may be formed accidentally, e.g. when waste peroxide is poured into waste solvents. [ 84 ] Acetone occurs naturally as part of certain metabolic processes in the human body, and has been studied extensively and is believed to exhibit only slight toxicity in normal use. There is no strong evidence of chronic health effects if basic precautions are followed. [ 85 ] It is generally recognized to have low acute and chronic toxicity if ingested and/or inhaled. 
[ 86 ] Acetone is not currently regarded as a carcinogen , a mutagen , or a concern for chronic neurotoxicity effects. [ 80 ] Acetone can be found as an ingredient in a variety of consumer products ranging from cosmetics to processed and unprocessed foods. Acetone has been rated as a generally recognized as safe (GRAS) substance when present in drinks, baked foods, desserts, and preserves at concentrations ranging from 5 to 8 mg/L. [ 86 ] Acetone is however an irritant, causing mild skin and moderate-to-severe eye irritation. At high vapor concentrations, it may depress the central nervous system like many other solvents. [ 87 ] Acute toxicity for mice by ingestion (LD 50 ) is 3 g/kg, and by inhalation (LC 50 ) is 44 g/m 3 over 4 hours. [ 88 ] Although acetone occurs naturally in the environment in plants, trees, volcanic gases, forest fires, and as a product of the breakdown of body fat, [ 89 ] the majority of the acetone released into the environment is of industrial origin. [ clarification needed ] Acetone evaporates rapidly, even from water and soil. Once in the atmosphere, it has a 22-day half-life and is degraded by UV light via photolysis (primarily into methane and ethane . [ 90 ] ) Consumption by microorganisms contributes to the dissipation of acetone in soil, animals, or waterways. [ 89 ] In 1995, the United States Environmental Protection Agency (EPA) removed acetone from the list of volatile organic compounds . The companies requesting the removal argued that it would "contribute to the achievement of several important environmental goals and would support EPA's pollution prevention efforts", and that acetone could be used as a substitute for several compounds that are listed as hazardous air pollutants (HAP) under section 112 of the Clean Air Act . [ 91 ] In making its decision EPA conducted an extensive review of the available toxicity data on acetone, which was continued through the 2000s. It found that the evaluable "data are inadequate for an assessment of the human carcinogenic potential of acetone". [ 9 ] On 30 July 2015, scientists reported that upon the first touchdown of the Philae lander on comet 67P 's surface, measurements by the COSAC and Ptolemy instruments revealed sixteen organic compounds , four of which were seen for the first time on a comet, including acetamide , acetone, methyl isocyanate , and propionaldehyde . [ 92 ] [ 93 ] [ 94 ]
https://en.wikipedia.org/wiki/Me2CO
Dimethyl sulfate ( DMS ) is a chemical compound with formula (CH 3 O) 2 SO 2 . As the diester of methanol and sulfuric acid , its formula is often written as ( CH 3 ) 2 SO 4 or Me 2 SO 4 , where CH 3 or Me is methyl . Me 2 SO 4 is mainly used as a methylating agent in organic synthesis . Me 2 SO 4 is a colourless oily liquid with a slight onion-like odour. Like all strong alkylating agents , Me 2 SO 4 is toxic . [ 3 ] Its use as a laboratory reagent has been superseded to some extent by methyl triflate , CF 3 SO 3 CH 3 , the methyl ester of trifluoromethanesulfonic acid . Impure dimethyl sulfate was prepared in the early 19th century. [ 4 ] J. P. Claesson later extensively studied its preparation. [ 5 ] [ 6 ] It was investigated for possible use in chemical warfare in World War I [ 7 ] [ 8 ] in 75% to 25% mixture with methyl chlorosulfonate (CH 3 ClO 3 S) called "C-stoff" in Germany, or with chlorosulfonic acid called "Rationite" in France. [ 9 ] The esterification of sulfuric acid with methanol was described in 1835: [ 10 ] Dimethyl sulfate is produced commercially by the continuous reaction of dimethyl ether with sulfur trioxide : [ 3 ] Dimethyl sulfate can be synthesized in the laboratory by several methods. [ 11 ] The reaction of methyl nitrite and methyl chlorosulfonate also results in dimethyl sulfate: [ 6 ] Dimethyl sulfate is a reagent for the methylation of phenols , amines , and thiols . One methyl group is transferred more quickly than the second. Methyl transfer is assumed to occur via an S N 2 reaction. Compared to other methylating agents, dimethyl sulfate is preferred by the industry because of its low cost and high reactivity. Commonly dimethyl sulfate is employed to methylate phenols . [ 12 ] [ 13 ] In some cases, simple alcohols are also methylated, as illustrated by the conversion of tert -butanol to t -butyl methyl ether : The methylation of sugars is called Haworth methylation . [ 14 ] The methylation of ketones is called the Lavergne reaction. Me 2 SO 4 is used to prepare both quaternary ammonium salts or tertiary amines : Quaternized fatty ammonium compounds are used as a surfactant or fabric softener. Methylation to create a tertiary amine is illustrated as: [ 13 ] Thiolate salts are easily methylated by Me 2 SO 4 to give methyl thioethers : [ 13 ] In a related example: [ 15 ] This method has been used to prepare thioesters from thiocarboxylic acids : Dimethyl sulfate (DMS) is used to determine the secondary structure of RNA . At neutral pH, DMS methylates unpaired adenine and cytosine residues at their canonical Watson–Crick faces, but it cannot methylate base-paired nucleotides. Using the method known as DMS-MaPseq , [ 16 ] RNA is incubated with DMS to methylate unpaired bases. Then the RNA is reverse-transcribed; the reverse transcriptase frequently adds an incorrect DNA base when it encounters a methylated RNA base. These mutations can be detected via sequencing , and the RNA is inferred to be single-stranded at bases with above-background mutation rates. Dimethyl sulfate can affect the base-specific cleavage of DNA by attacking the imidazole rings present in guanine. [ 17 ] Dimethyl sulfate also methylates adenine in single-stranded portions of DNA (for example, those with proteins like RNA polymerase progressively melting and re-annealing the DNA). Upon re-annealing, these methyl groups interfere with adenine-guanine base-pairing. Nuclease S1 can then be used to cut the DNA in single-stranded regions (anywhere with a methylated adenine). 
This is an important technique for analyzing protein-DNA interactions. Although dimethyl sulfate is highly effective and affordable, its toxicity has encouraged the use of other methylating reagents. Methyl iodide is a reagent used for O -methylation, like dimethyl sulfate, but it is less hazardous and more expensive. [ 15 ] Dimethyl carbonate , which is far less reactive, has far lower toxicity compared to both dimethyl sulfate and methyl iodide. [ 18 ] High pressure can be used to accelerate methylation by dimethyl carbonate. In general, the toxicity of methylating agents is correlated with their efficiency as methyl transfer reagents. Dimethyl sulfate is carcinogenic [ 19 ] and mutagenic , highly poisonous , corrosive , and environmentally hazardous . [ 20 ] It is absorbed through the skin, mucous membranes, and gastrointestinal tract, and can cause a fatal delayed respiratory tract reaction. An ocular reaction is also common. There is no strong odor or immediate irritation to warn of lethal concentration in the air. The LD 50 (acute, oral) is 205 mg/kg (rat) and 140 mg/kg (mouse), and LC 50 (acute) is 45 ppm per 4 hours (rat). [ 21 ] The vapor pressure of 65 Pa [ 22 ] is sufficiently large to produce a lethal concentration in air by evaporation at 20 °C. Delayed toxicity allows potentially fatal exposures to occur prior to development of any warning symptoms. [ 20 ] Symptoms may be delayed 6–24 h. Concentrated solutions of bases (ammonia, alkalis) can be used to hydrolyze minor spills and residues on contaminated equipment, but the reaction may become violent with larger amounts of dimethyl sulfate (see ICSC). Although the compound hydrolyses, treatment with water cannot be assumed to decontaminate it. One hypothesis regarding the apparently mysterious 1994 "toxic lady" incident is that the person at the centre of the incident had built up dimethyl sulfone crystals in her blood, which were converted by an unknown mechanism to dimethyl sulfate vapour that poisoned attending medical staff. [ 23 ] [ 24 ]
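As a rough illustration of the mutational-profiling idea behind DMS-MaPseq described above, the sketch below flags bases whose post-reverse-transcription mutation rate rises well above a background estimate; the counts, background rate, threshold, and function name are all hypothetical.

```python
# A minimal sketch (illustrative only): infer likely single-stranded positions from
# per-base mutation counts in a DMS-MaPseq-style experiment.  The example counts,
# the background rate, and the fold-change threshold are hypothetical values.
def flag_unpaired(mutations, coverage, background=0.002, fold=5.0):
    """Return indices whose mutation rate exceeds `fold` times the background rate."""
    flagged = []
    for i, (m, c) in enumerate(zip(mutations, coverage)):
        if c == 0:
            continue                      # no reads, no call
        if m / c > fold * background:
            flagged.append(i)             # likely unpaired (A or C accessible to DMS)
    return flagged

# Hypothetical counts over a 10-nucleotide window, each covered by 1000 reads.
mutations = [1, 0, 25, 2, 30, 1, 0, 3, 40, 1]
coverage = [1000] * 10
print(flag_unpaired(mutations, coverage))   # -> [2, 4, 8]
```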
https://en.wikipedia.org/wiki/Me2SO4
Trimethylborane (TMB) is a toxic, pyrophoric gas with the formula B(CH 3 ) 3 (which can also be written as Me 3 B, with Me representing methyl ). As a liquid it is colourless. The strongest line in the infrared spectrum is at 1330 cm −1 followed by lines at 3010 cm −1 and 1185 cm −1 . Its melting point is −161.5 °C, and its boiling point is −20.2 °C. Vapour pressure is given by log P = 6.1385 + 1.75 log T − 1393.3/ T − 0.007735 T , where T is temperature in kelvins . [ 5 ] Molecular weight is 55.914. The heat of vapourisation is 25.6 kJ/mol. [ 4 ] Trimethylborane was first described in 1862 by Edward Frankland , [ 6 ] who also mentioned its adduct with ammonia. [ 7 ] Due to its dangerous nature the compound was no longer studied until 1921, when Alfred Stock and Friedrich Zeidler took advantage of the reaction between boron trichloride gas and dimethylzinc . [ 8 ] Although the substance can be prepared using Grignard reagents the output is contaminated by unwanted products from the solvent. Trimethylborane can be made on a small scale with a 98% yield by reacting trimethylaluminium in hexane with boron tribromide in dibutyl ether as a solvent. [ 5 ] Yet other methods are reacting tributyl borate with trimethylaluminium chloride, or potassium tetrafluoroborate with trimethylaluminium, [ 9 ] or adding boron trifluoride in ether to methyl magnesium iodide . [ 10 ] Trimethylborane spontaneously ignites in air if the concentration is high enough. It burns with a green flame producing soot. [ 11 ] Slower oxidation with oxygen in a solvent or in the gas phase can produce dimethyltrioxadiboralane, which contains a ring of two boron and three oxygen atoms. However the major product is dimethylborylmethylperoxide, which rapidly decomposes to dimethoxymethylborane. [ 12 ] Trimethylborane is a strong Lewis acid . B(CH 3 ) 3 can form an adduct with ammonia : (NH 3 ):B(CH 3 ) 3 . [ 13 ] as well as other Lewis bases. The Lewis acid properties of B(CH 3 ) 3 have been analyzed by the ECW model yielding E A = 2.90 and C A = 3.60. When trimethylborane forms an adduct with trimethylamine , steric repulsion between the methyl groups on the B and N results. The ECW model can provide a measure of this steric effect. Trimethylborane reacts with water and chlorine at room temperature. It also reacts with grease but not with teflon or glass. [ 5 ] Trimethylborane reacts with diborane to disproportionate to form methyldiborane and dimethyldiborane : (CH 3 )BH 2 .BH 3 and (CH 3 ) 2 BH.BH 3 . It reacts as a gas with trimethylphosphine to form a solid Lewis salt with a heat of formation of −41 kcal per mol. This adduct has a heat of sublimation of −24.6 kcal/mol. No reaction occurs with trimethylarsine or trimethylstibine . [ 10 ] Methyl lithium reacting with the Trimethylborane produces a tetramethylborate salt: LiB(CH 3 ) 4 . [ 14 ] The tetramethylborate ion has a negative charge and is isoelectronic with neopentane , tetramethylsilane , and the tetramethylammonium cation. Trimethylborane has been used as a neutron counter. [ 15 ] For this use it has to be very pure. [ 13 ] It is also used in chemical vapour deposition where boron and carbon need to be deposited together.
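As a quick check of the vapour-pressure correlation quoted above, the sketch below evaluates log P = 6.1385 + 1.75 log T − 1393.3/T − 0.007735 T near the boiling point. The source does not state the pressure unit; the result of roughly 760 at −20.2 °C suggests the correlation returns the pressure in torr, which is assumed here.

```python
# A minimal sketch: evaluating the trimethylborane vapour-pressure correlation
# log P = 6.1385 + 1.75*log10(T) - 1393.3/T - 0.007735*T, with T in kelvins.
# The pressure unit is not stated in the text; torr is assumed because the value
# comes out near 760 (1 atm) at the boiling point of -20.2 C.
import math

def vapour_pressure_torr(T_kelvin):
    log_p = 6.1385 + 1.75 * math.log10(T_kelvin) - 1393.3 / T_kelvin - 0.007735 * T_kelvin
    return 10 ** log_p

T_boil = 273.15 - 20.2                     # boiling point of trimethylborane, in K
print(f"P({T_boil:.2f} K) ~ {vapour_pressure_torr(T_boil):.0f}")   # ~757, close to 760 torr
```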
https://en.wikipedia.org/wiki/Me3B
MeRIPseq [ 1 ] (or MeRIP-seq ) stands for methylated RNA immunoprecipitation sequencing, a method for the detection of post-transcriptional RNA modifications developed by Kate Meyer et al. while working in the laboratory of Sammie Jaffrey at Cornell University Graduate School of Medical Sciences. It is also called m6A-seq. [ 2 ] A variation of the MeRIP-seq method was developed by Benjamin Delatte and colleagues in 2016. This variant, called hMeRIP-seq (hydroxymethylcytosine RNA immunoprecipitation), uses an antibody that specifically recognizes 5-hydroxymethylcytosine, a modified RNA base affecting in vitro translation and brain development in Drosophila. [ 3 ]
https://en.wikipedia.org/wiki/MeRIPseq
Meagher Electronics was a Monterey, California , company founded in 1947 by Jim Meagher. It included a recording studio which recorded early demos for Joan Baez , her sister Mimi Farina , and Mimi's husband Richard Farina . The company also repaired all sorts of home entertainment equipment, focusing on professional and semi-professional sound equipment and high-end home systems. It had a huge, high warehouse space in which hundreds of old wooden console radios and phonographs dating back to the 1920s were stacked to the rafters. Meagher used to explain that these had been left by customers who chose not to pick them up rather than pay the repair estimate charges. [ 1 ] The firm was also a commercial sound installation company and one of the first Altec Lansing dealers in the country, with catalogues and equipment going back to 1947. It provided the sound system for most of the concerts and live events in the Monterey area from the 1950s through the 1970s, from folk to jazz to Roger Williams . It also recorded the first gold jazz album, Erroll Garner 's Concert by the Sea , in the mid-1950s on a portable mono Ampex 601 tape recorder, which remained a prized possession for many years. The Monterey Jazz Festival contracted Meagher to provide its sound reinforcement system from the beginning of its existence in 1958, and the firm was extremely conscientious about providing the best quality sound possible, often using recording-quality condenser microphones and custom-designed loudspeaker arrays. The company supplied sound reinforcement systems for the Big Sur Folk Festivals and assisted Harry McCune Sound from San Francisco and their sound designer Abe Jacob , who was contracted to provide the sound system for the Monterey Pop Festival in 1967. [ 2 ]
https://en.wikipedia.org/wiki/Meagher_Electronics
Meal powder [ 1 ] is the fine dust left over when black powder ( gunpowder ) is corned and screened to separate it into different grain sizes. It is used extensively in various pyrotechnic procedures, usually to prime other compositions. It can also be used in many fireworks to add power and substantially increase the height of the firework. The term has occasionally been used as a synonym for Serpentine powder , which meal powder physically resembles. [ 2 ] 'Mill meal' powder is a mixture of potassium nitrate , charcoal and sulfur in the correct proportions (75% potassium nitrate : 15% charcoal : 10% sulfur) which has been ball-milled to mix it intimately. It is used in the same way as commercial meal powder, or it can be pressed and corned to produce true black powder. Meal powder is made by mixing the ingredients by mass, rather than by volume. These ingredients are processed in a ball mill , essentially a rotating drum containing non-sparking ceramic or lead balls. The more time spent in the mill, the more effective the powder will be. One main reason to ball mill, as opposed to using other methods, is that it presses the sulfur and KNO 3 into the porous charcoal.
https://en.wikipedia.org/wiki/Meal_powder
Mean-field game theory is the study of strategic decision making by small interacting agents in very large populations. It lies at the intersection of game theory with stochastic analysis and control theory . The use of the term "mean field" is inspired by mean-field theory in physics, which considers the behavior of systems of large numbers of particles where individual particles have negligible impacts upon the system. In other words, each agent acts according to their own minimization or maximization problem, taking into account other agents' decisions, and because the population is large we can assume the number of agents goes to infinity and that a representative agent exists. [ 1 ] In traditional game theory , the subject of study is usually a game with two players and a discrete time space, and the results are extended to more complex situations by induction. However, for games in continuous time with continuous states (differential games or stochastic differential games) this strategy cannot be used because of the complexity that the dynamic interactions generate. On the other hand, with MFGs we can handle large numbers of players through the mean representative agent and at the same time describe complex state dynamics. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal , [ 2 ] in the engineering literature by Minyi Huang, Roland Malhame, and Peter E. Caines , [ 3 ] [ 4 ] [ 5 ] and independently and around the same time by mathematicians Jean-Michel Lasry and Pierre-Louis Lions . [ 6 ] [ 7 ] In continuous time a mean-field game is typically composed of a Hamilton–Jacobi–Bellman equation that describes the optimal control problem of an individual and a Fokker–Planck equation that describes the dynamics of the aggregate distribution of agents. Under fairly general assumptions it can be proved that a class of mean-field games is the limit as N → ∞ of an N -player Nash equilibrium . [ 8 ] A related concept to that of mean-field games is "mean-field-type control". In this case, a social planner controls the distribution of states and chooses a control strategy. The solution to a mean-field-type control problem can typically be expressed as a dual adjoint Hamilton–Jacobi–Bellman equation coupled with a Kolmogorov equation . Mean-field-type game theory is the multi-agent generalization of single-agent mean-field-type control. [ 9 ] The following system of equations [ 10 ] can be used to model a typical mean-field game:

−∂ t u − νΔu + H(x, m, Du) = 0   (1)
∂ t m − νΔm − div(D p H(x, m, Du) m) = 0   (2)
m(0) = m 0   (3)
u(x, T) = G(x, m(T))   (4)

The basic dynamics of this set of equations can be explained by an average agent's optimal control problem. In a mean-field game, an average agent can control their movement α to influence the population's overall location by

dX t = α t dt + √(2ν) dB t

where ν is a parameter and B t is a standard Brownian motion.
By controlling their movement, the agent aims to minimize their overall expected cost C throughout the time period [0, T]:

C = E[ ∫ 0 T L(X s , α s , m(s)) ds + G(X T , m(T)) ]

where L(X s , α s , m(s)) is the running cost at time s and G(X T , m(T)) is the terminal cost at time T. By this definition, at time t and position x, the value function u(t, x) can be determined as

u(t, x) = inf α E[ ∫ t T L(X s , α s , m(s)) ds + G(X T , m(T)) ].

Given the definition of the value function u(t, x), it can be tracked by the Hamilton-Jacobi equation (1). The optimal action of the average player, α*(x, t), can be determined as α*(x, t) = D p H(x, m, Du). As all agents are relatively small and cannot single-handedly change the dynamics of the population, they will individually adopt the optimal control and the population will move in that way. This is similar to a Nash equilibrium, in which all agents act in response to a specific set of the others' strategies. The optimal control solution then leads to the Kolmogorov-Fokker-Planck equation (2). A prominent category of mean-field games is games with a finite number of states and a finite number of actions per player. For those games, the analog of the Hamilton-Jacobi-Bellman equation is the Bellman equation, and the discrete version of the Fokker-Planck equation is the Kolmogorov equation. Specifically, for discrete-time models, the players' strategy is the Kolmogorov equation's probability matrix. In continuous-time models, players have the ability to control the transition rate matrix. A discrete mean-field game can be defined by a tuple G = (E, A, {Q a }, m 0 , {c a }, β), where E is the state space, A the action set, Q a the transition rate matrices, m 0 the initial state, {c a } the cost functions and β ∈ R a discount factor. Furthermore, a mixed strategy is a measurable function π : E × R + → P(A) that associates to each state i ∈ E and each time t ≥ 0 a probability measure π i (t) ∈ P(A) on the set of possible actions. Thus π i,a (t) is the probability that, at time t, a player in state i takes action a, under strategy π.
Additionally, the rate matrices {Q a (m π (t))} a∈A define the evolution over time of the population distribution, where m π (t) ∈ P(E) is the population distribution at time t. [ 11 ] From Caines (2009), a relatively simple model of large-scale games is the linear-quadratic Gaussian model. The individual agent's dynamics are modeled as a stochastic differential equation

dX i = (a i X i + b i u i ) dt + σ i dW i ,   i = 1, …, N,

where X i is the state of the i -th agent, u i is the control of the i -th agent, and the W i are independent Wiener processes for all i = 1, …, N. The individual agent's cost is

J i (u i , ν) = E{ ∫ 0 ∞ e −ρt [ (X i − ν)² + r u i ² ] dt },   ν = Φ( (1/N) ∑ k≠i X k + η ).

The coupling between agents occurs in the cost function. The paradigm of mean-field games has become a major connection between distributed decision-making and stochastic modeling. Starting out in the stochastic control literature, it is gaining rapid adoption across a range of applications, including the following.

a. Financial markets. Carmona reviews applications in financial engineering and economics that can be cast and tackled within the framework of the MFG paradigm. [ 12 ] Carmona argues that models in macroeconomics, contract theory, finance, and so on, greatly benefit from the switch to continuous time from the more traditional discrete-time models. He considers only continuous-time models in his review chapter, including systemic risk, price impact, optimal execution, models for bank runs, high-frequency trading, and cryptocurrencies.

b. Crowd motions. MFG assumes that individuals are smart players who try to optimize their strategy and path with respect to certain costs (an equilibrium-with-rational-expectations approach). MFG models are useful for describing the anticipation phenomenon: the forward part describes the crowd evolution while the backward part gives the process by which the anticipations are built. Additionally, compared to multi-agent microscopic model computations, MFG requires only lower computational costs for macroscopic simulations. Some researchers have turned to MFG in order to model the interaction between populations and study the decision-making processes of intelligent agents, including aversion and congestion behavior between two groups of pedestrians, [ 13 ] departure time choice of morning commuters, [ 14 ] and decision-making processes for autonomous vehicles. [ 15 ]

c. Control and mitigation of epidemics. Since epidemics affect society and individuals significantly, MFG and mean-field controls (MFCs) provide a perspective for studying and understanding the underlying population dynamics, especially in the context of the Covid-19 pandemic response. MFG has been used to extend SIR-type dynamics with spatial effects or to allow individuals to choose their behaviors and control their contributions to the spread of the disease.
MFC has been applied to design optimal strategies for controlling the spread of the virus within a spatial domain, [ 16 ] for steering individuals’ decisions to limit their social interactions, [ 17 ] and for supporting the government’s nonpharmaceutical interventions. [ 18 ]
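To make the linear-quadratic Gaussian model above concrete, the following Python sketch (an illustration only, with assumed parameter values and an assumed linear feedback gain, not the solution of the MFG fixed point) simulates N agents whose states follow dX_i = (aX_i + bu_i)dt + σdW_i and who are coupled only through the empirical mean. As N grows, the empirical mean tracks the deterministic mean-field limit ever more closely, which is the sense in which any single agent has negligible impact on the population.

import numpy as np

# Illustrative (assumed) parameters -- this is not a solved MFG equilibrium,
# only a demonstration of the mean-field coupling in the LQG dynamics.
a, b, sigma = -0.5, 1.0, 0.2      # dX = (a X + b u) dt + sigma dW
k = 0.8                           # assumed linear feedback gain toward the empirical mean
dt, T_end = 0.01, 5.0
steps = int(T_end / dt)
rng = np.random.default_rng(0)

def max_mean_deviation(N):
    """Largest gap between the empirical mean of N agents and the limiting mean-field ODE m' = a m."""
    X = rng.normal(1.0, 0.5, size=N)
    m0, worst, t = X.mean(), 0.0, 0.0
    for _ in range(steps):
        m = X.mean()
        worst = max(worst, abs(m - m0 * np.exp(a * t)))
        u = -k * (X - m)                              # each agent reacts to the current mean field
        X = X + (a * X + b * u) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
        t += dt
    return worst

for N in (10, 100, 10_000):
    print(f"N = {N:6d}   max deviation from mean-field limit = {max_mean_deviation(N):.4f}")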
https://en.wikipedia.org/wiki/Mean-field_game_theory
In physics and probability theory , Mean-field theory ( MFT ) or Self-consistent field theory studies the behavior of high-dimensional random ( stochastic ) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other. The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field . [ 1 ] This reduces any many-body problem into an effective one-body problem . The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost. MFT has since been applied to a wide range of fields outside of physics, including statistical inference , graphical models , neuroscience , [ 2 ] artificial intelligence , epidemic models , [ 3 ] queueing theory , [ 4 ] computer-network performance and game theory , [ 5 ] as in the quantal response equilibrium [ citation needed ] . The idea first appeared in physics ( statistical mechanics ) in the work of Pierre Curie [ 6 ] and Pierre Weiss to describe phase transitions . [ 7 ] MFT has been used in the Bragg–Williams approximation, models on Bethe lattice , Landau theory , Curie-Weiss law for magnetic susceptibility, Flory–Huggins solution theory , and Scheutjens–Fleer theory . Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model ). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original problem to be solvable and open to calculation, and in some cases MFT may give very accurate approximations. In field theory , the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means that an MFT system has no fluctuations, but this coincides with the idea that one is replacing all interactions with a "mean-field”. Quite often, MFT provides a convenient launch point for studying higher-order fluctuations. For example, when computing the partition function , studying the combinatorics of the interaction terms in the Hamiltonian can sometimes at best produce perturbation results or Feynman diagrams that correct the mean-field approximation. In general, dimensionality plays an active role in determining whether a mean-field approach will work for any particular problem. There is sometimes a critical dimension above which MFT is valid and below which it is not. Heuristically, many interactions are replaced in MFT by one effective interaction. So if the field or particle exhibits many random interactions in the original system, they tend to cancel each other out, so the mean effective interaction and MFT will be more accurate. This is true in cases of high dimensionality, when the Hamiltonian includes long-range forces, or when the particles are extended (e.g. polymers ). The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, often depending upon the number of spatial dimensions in the system of interest. 
The formal basis for mean-field theory is the Bogoliubov inequality . This inequality states that the free energy of a system with Hamiltonian has the following upper bound: where S 0 {\displaystyle S_{0}} is the entropy , and F {\displaystyle F} and F 0 {\displaystyle F_{0}} are Helmholtz free energies . The average is taken over the equilibrium ensemble of the reference system with Hamiltonian H 0 {\displaystyle {\mathcal {H}}_{0}} . In the special case that the reference Hamiltonian is that of a non-interacting system and can thus be written as where ξ i {\displaystyle \xi _{i}} are the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth), one can consider sharpening the upper bound by minimising the right side of the inequality. The minimising reference system is then the "best" approximation to the true system using non-correlated degrees of freedom and is known as the mean field approximation . For the most common case that the target Hamiltonian contains only pairwise interactions, i.e., where P {\displaystyle {\mathcal {P}}} is the set of pairs that interact, the minimising procedure can be carried out formally. Define Tr i ⁡ f ( ξ i ) {\displaystyle \operatorname {Tr} _{i}f(\xi _{i})} as the generalized sum of the observable f {\displaystyle f} over the degrees of freedom of the single component (sum for discrete variables, integrals for continuous ones). The approximating free energy is given by where P 0 ( N ) ( ξ 1 , ξ 2 , … , ξ N ) {\displaystyle P_{0}^{(N)}(\xi _{1},\xi _{2},\dots ,\xi _{N})} is the probability to find the reference system in the state specified by the variables ( ξ 1 , ξ 2 , … , ξ N ) {\displaystyle (\xi _{1},\xi _{2},\dots ,\xi _{N})} . This probability is given by the normalized Boltzmann factor where Z 0 {\displaystyle Z_{0}} is the partition function . Thus In order to minimise, we take the derivative with respect to the single-degree-of-freedom probabilities P 0 ( i ) {\displaystyle P_{0}^{(i)}} using a Lagrange multiplier to ensure proper normalization. The end result is the set of self-consistency equations where the mean field is given by Mean field theory can be applied to a number of physical systems so as to study phenomena such as phase transitions . [ 8 ] The Bogoliubov inequality, shown above, can be used to find the dynamics of a mean field model of the two-dimensional Ising lattice . A magnetisation function can be calculated from the resultant approximate free energy . [ 9 ] The first step is choosing a more tractable approximation of the true Hamiltonian. Using a non-interacting or effective field Hamiltonian, the variational free energy is By the Bogoliubov inequality, simplifying this quantity and calculating the magnetisation function that minimises the variational free energy yields the best approximation to the actual magnetisation. The minimiser is which is the ensemble average of spin. This simplifies to Equating the effective field felt by all spins to a mean spin value relates the variational approach to the suppression of fluctuations. The physical interpretation of the magnetisation function is then a field of mean values for individual spins. Consider the Ising model on a d {\displaystyle d} -dimensional lattice. 
The Hamiltonian is given by where the ∑ ⟨ i , j ⟩ {\displaystyle \sum _{\langle i,j\rangle }} indicates summation over the pair of nearest neighbors ⟨ i , j ⟩ {\displaystyle \langle i,j\rangle } , and s i , s j = ± 1 {\displaystyle s_{i},s_{j}=\pm 1} are neighboring Ising spins. Let us transform our spin variable by introducing the fluctuation from its mean value m i ≡ ⟨ s i ⟩ {\displaystyle m_{i}\equiv \langle s_{i}\rangle } . We may rewrite the Hamiltonian as where we define δ s i ≡ s i − m i {\displaystyle \delta s_{i}\equiv s_{i}-m_{i}} ; this is the fluctuation of the spin. If we expand the right side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term, which does not affect the statistical properties of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values. The mean field approximation consists of neglecting this second-order fluctuation term: These fluctuations are enhanced at low dimensions, making MFT a better approximation for high dimensions. Again, the summand can be re-expanded. In addition, we expect that the mean value of each spin is site-independent, since the Ising chain is translationally invariant. This yields The summation over neighboring spins can be rewritten as ∑ ⟨ i , j ⟩ = 1 2 ∑ i ∑ j ∈ n n ( i ) {\displaystyle \sum _{\langle i,j\rangle }={\frac {1}{2}}\sum _{i}\sum _{j\in nn(i)}} , where n n ( i ) {\displaystyle nn(i)} means "nearest neighbor of i {\displaystyle i} ", and the 1 / 2 {\displaystyle 1/2} prefactor avoids double counting, since each bond participates in two spins. Simplifying leads to the final expression where z {\displaystyle z} is the coordination number . At this point, the Ising Hamiltonian has been decoupled into a sum of one-body Hamiltonians with an effective mean field h eff. = h + J z m {\displaystyle h^{\text{eff.}}=h+Jzm} , which is the sum of the external field h {\displaystyle h} and of the mean field induced by the neighboring spins. It is worth noting that this mean field directly depends on the number of nearest neighbors and thus on the dimension of the system (for instance, for a hypercubic lattice of dimension d {\displaystyle d} , z = 2 d {\displaystyle z=2d} ). Substituting this Hamiltonian into the partition function and solving the effective 1D problem, we obtain where N {\displaystyle N} is the number of lattice sites. This is a closed and exact expression for the partition function of the system. We may obtain the free energy of the system and calculate critical exponents . In particular, we can obtain the magnetization m {\displaystyle m} as a function of h eff. {\displaystyle h^{\text{eff.}}} . We thus have two equations between m {\displaystyle m} and h eff. {\displaystyle h^{\text{eff.}}} , allowing us to determine m {\displaystyle m} as a function of temperature. This leads to the following observation: T c {\displaystyle T_{\text{c}}} is given by the following relation: T c = J z k B {\displaystyle T_{\text{c}}={\frac {Jz}{k_{B}}}} . This shows that MFT can account for the ferromagnetic phase transition. Similarly, MFT can be applied to other types of Hamiltonian as in the following cases: Variationally minimisation like mean field theory can be also be used in statistical inference. 
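The decoupling above leads to the standard mean-field self-consistency condition m = tanh((Jzm + h)/(k_B T)), which can be solved by fixed-point iteration. The sketch below is a minimal illustration in units where k_B = J = 1, with an assumed coordination number z = 4 (a square lattice); it recovers a nonzero spontaneous magnetization below T_c = Jz/k_B and none above.

import numpy as np

J, z, h = 1.0, 4, 0.0          # coupling, coordination number (assumed square lattice), external field
k_B = 1.0

def magnetization(T, m0=0.9, tol=1e-10, max_iter=10_000):
    """Fixed-point iteration of the mean-field self-consistency m = tanh((J z m + h) / (k_B T))."""
    m = m0
    for _ in range(max_iter):
        m_new = np.tanh((J * z * m + h) / (k_B * T))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

Tc = J * z / k_B               # mean-field critical temperature
for T in (0.5 * Tc, 0.9 * Tc, 0.99 * Tc, 1.1 * Tc, 2.0 * Tc):
    print(f"T/Tc = {T / Tc:4.2f}   m = {magnetization(T):.4f}")
# Below Tc the iteration settles at a nonzero m; above Tc it collapses to m = 0.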
In mean field theory, the mean field appearing in the single-site problem is a time-independent scalar or vector quantity. However, this isn't always the case: in a variant of mean field theory called dynamical mean field theory (DMFT), the mean field becomes a time-dependent quantity. For instance, DMFT can be applied to the Hubbard model to study the metal–Mott-insulator transition.
https://en.wikipedia.org/wiki/Mean-field_theory
In mathematical analysis , the concept of a mean-periodic function is a generalization introduced in 1935 by Jean Delsarte [ 1 ] [ 2 ] of the concept of a periodic function . Further results were obtained by Laurent Schwartz and J-P Kahane. [ 3 ] [ 4 ] Consider a continuous complex -valued function f of a real variable. The function f is periodic with period a precisely if for all real x , we have f ( x ) − f ( x − a ) = 0 . This can be written as f ∗ μ = 0 ( 1 ), where μ {\displaystyle \mu } is the difference between the Dirac measures at 0 and a . The function f is mean-periodic if it satisfies the same equation (1), but where μ {\displaystyle \mu } is some arbitrary nonzero measure with compact (hence bounded) support. Equation (1) can be interpreted as a convolution , so that a mean-periodic function is a function f for which there exists a compactly supported (signed) Borel measure μ {\displaystyle \mu } for which f ∗ μ = 0 {\displaystyle f*\mu =0} . [ 4 ] There are several well-known equivalent definitions. [ 2 ] Mean-periodic functions are a generalization of periodic functions separate from the almost periodic functions . For instance, exponential functions are mean-periodic since exp( x + 1) − e ·exp( x ) = 0 , but they are not almost periodic as they are unbounded. Still, there is a theorem which states that any uniformly continuous bounded mean-periodic function is almost periodic (in the sense of Bohr). In the other direction, there exist almost periodic functions which are not mean-periodic. [ 2 ] If f is a mean-periodic function, then it is the limit of a certain sequence of exponential polynomials, which are finite linear combinations of terms t^n exp( at ) where n is any non-negative integer and a is any complex number; also, the derivative Df is mean-periodic, and if h is an exponential polynomial, then the pointwise product of f and h is mean-periodic. If f and g are mean-periodic then f + g and the truncated convolution product of f and g are mean-periodic. However, the pointwise product of f and g need not be mean-periodic. If L(D) is a linear differential operator with constant coefficients, and L(D)f = g, then f is mean-periodic if and only if g is mean-periodic. For linear differential-difference equations such as Df(t) − af(t − b) = g, where a is any complex number and b is a positive real number, f is mean-periodic if and only if g is mean-periodic. [ 5 ] In work related to the Langlands correspondence , the mean-periodicity of certain (functions related to) zeta functions associated to an arithmetic scheme has been suggested to correspond to automorphicity of the related L-function. [ 6 ] There is a certain class of mean-periodic functions arising from number theory.
https://en.wikipedia.org/wiki/Mean-periodic_function
In probability and statistics , a mean-preserving spread ( MPS ) [ 1 ] is a change from one probability distribution A to another probability distribution B, where B is formed by spreading out one or more portions of A's probability density function or probability mass function while leaving the mean (the expected value ) unchanged. As such, the concept of mean-preserving spreads provides a stochastic ordering of equal-mean gambles (probability distributions) according to their degree of risk ; this ordering is partial , meaning that of two equal-mean gambles, it is not necessarily true that either is a mean-preserving spread of the other. Distribution A is said to be a mean-preserving contraction of B if B is a mean-preserving spread of A. Ranking gambles by mean-preserving spreads is a special case of ranking gambles by second-order stochastic dominance – namely, the special case of equal means: If B is a mean-preserving spread of A, then A is second-order stochastically dominant over B; and the converse holds if A and B have equal means. If B is a mean-preserving spread of A, then B has a higher variance than A and the expected values of A and B are identical; but the converse is not in general true, because the variance is a complete ordering while ordering by mean-preserving spreads is only partial. This example shows that to have a mean-preserving spread does not require that all or most of the probability mass move away from the mean. [ 2 ] Let A have equal probabilities 1 / 100 {\displaystyle 1/100} on each outcome x A i {\displaystyle x_{Ai}} , with x A i = 198 {\displaystyle x_{Ai}=198} for i = 1 , … , 50 {\displaystyle i=1,\dots ,50} and x A i = 202 {\displaystyle x_{Ai}=202} for i = 51 , … , 100 {\displaystyle i=51,\dots ,100} ; and let B have equal probabilities 1 / 100 {\displaystyle 1/100} on each outcome x B i {\displaystyle x_{Bi}} , with x B 1 = 100 {\displaystyle x_{B1}=100} , x B i = 200 {\displaystyle x_{Bi}=200} for i = 2 , … , 99 {\displaystyle i=2,\dots ,99} , and x B 100 = 300 {\displaystyle x_{B100}=300} . Here B has been constructed from A by moving one chunk of 1% probability from 198 to 100 and moving 49 probability chunks from 198 to 200, and then moving one probability chunk from 202 to 300 and moving 49 probability chunks from 202 to 200. This sequence of two mean-preserving spreads is itself a mean-preserving spread, despite the fact that 98% of the probability mass has moved to the mean (200). Let x A {\displaystyle x_{A}} and x B {\displaystyle x_{B}} be the random variables associated with gambles A and B. Then B is a mean-preserving spread of A if and only if x B = d ( x A + z ) {\displaystyle x_{B}{\overset {d}{=}}(x_{A}+z)} for some random variable z {\displaystyle z} having E ( z ∣ x A ) = 0 {\displaystyle E(z\mid x_{A})=0} for all values of x A {\displaystyle x_{A}} . Here = d {\displaystyle {\overset {d}{=}}} means " is equal in distribution to " (that is, "has the same distribution as"). Mean-preserving spreads can also be defined in terms of the cumulative distribution functions F A {\displaystyle F_{A}} and F B {\displaystyle F_{B}} of A and B. If A and B have equal means, B is a mean-preserving spread of A if and only if the area under F A {\displaystyle F_{A}} from minus infinity to x {\displaystyle x} is less than or equal to that under F B {\displaystyle F_{B}} from minus infinity to x {\displaystyle x} for all real numbers x {\displaystyle x} , with strict inequality at some x {\displaystyle x} . 
Both of these mathematical definitions replicate those of second-order stochastic dominance for the case of equal means. If B is a mean-preserving spread of A then A will be preferred by all expected utility maximizers having concave utility. The converse also holds: if A and B have equal means and A is preferred by all expected utility maximizers having concave utility, then B is a mean-preserving spread of A.
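The two-gamble example above lends itself to a direct numerical check. The following sketch builds the distributions A and B described in the text, confirms that their means coincide while B has the larger variance, and verifies the integrated-CDF condition for a mean-preserving spread on a grid.

import numpy as np

# Gamble A: 50 outcomes at 198 and 50 at 202, each with probability 1/100.
A = np.array([198.0] * 50 + [202.0] * 50)
# Gamble B: one outcome at 100, 98 outcomes at 200, one at 300, each with probability 1/100.
B = np.array([100.0] + [200.0] * 98 + [300.0])

print("means:    ", A.mean(), B.mean())    # both equal 200
print("variances:", A.var(), B.var())      # Var(B) > Var(A)

# Mean-preserving spread condition: the integral of F_A up to x never exceeds that of F_B.
grid = np.linspace(0.0, 400.0, 4001)
dx = grid[1] - grid[0]
F_A = np.array([(A <= x).mean() for x in grid])
F_B = np.array([(B <= x).mean() for x in grid])
int_A, int_B = np.cumsum(F_A) * dx, np.cumsum(F_B) * dx
holds = np.all(int_A <= int_B + 1e-9) and np.any(int_B > int_A + 1e-9)
print("B is a mean-preserving spread of A:", holds)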
https://en.wikipedia.org/wiki/Mean-preserving_spread
In mathematics , the mean (topological) dimension of a topological dynamical system is a non-negative extended real number that is a measure of the complexity of the system. Mean dimension was first introduced in 1999 by Gromov . [ 1 ] Shortly after it was developed and studied systematically by Lindenstrauss and Weiss . [ 2 ] In particular they proved the following key fact: a system with finite topological entropy has zero mean dimension. For various topological dynamical systems with infinite topological entropy, the mean dimension can be calculated or at least bounded from below and above. This allows mean dimension to be used to distinguish between systems with infinite topological entropy. Mean dimension is also related to the problem of embedding topological  dynamical systems in shift spaces (over Euclidean cubes). A topological dynamical system consists of a compact Hausdorff topological space X {\displaystyle \textstyle X} and a continuous self-map T : X → X {\displaystyle \textstyle T:X\rightarrow X} . Let O {\displaystyle \textstyle {\mathcal {O}}} denote the collection of open finite covers of X {\displaystyle \textstyle X} . For α ∈ O {\displaystyle \textstyle \alpha \in {\mathcal {O}}} define its order by An open finite cover β {\displaystyle \textstyle \beta } refines α {\displaystyle \textstyle \alpha } , denoted β ≻ α {\displaystyle \textstyle \beta \succ \alpha } , if for every V ∈ β {\displaystyle \textstyle V\in \beta } , there is U ∈ α {\displaystyle \textstyle U\in \alpha } so that V ⊂ U {\displaystyle \textstyle V\subset U} . Let Note that in terms of this definition the Lebesgue covering dimension is defined by dim L e b ⁡ ( X ) = sup α ∈ O D ( α ) {\displaystyle \dim _{\mathrm {Leb} }(X)=\sup _{\alpha \in {\mathcal {O}}}D(\alpha )} . Let α , β {\displaystyle \textstyle \alpha ,\beta } be open finite covers of X {\displaystyle \textstyle X} . The join of α {\displaystyle \textstyle \alpha } and β {\displaystyle \textstyle \beta } is the open finite cover by all sets of the form A ∩ B {\displaystyle \textstyle A\cap B} where A ∈ α {\displaystyle \textstyle A\in \alpha } , B ∈ β {\displaystyle \textstyle B\in \beta } . Similarly one can define the join ⋁ i = 1 n α i {\displaystyle \textstyle \bigvee _{i=1}^{n}\alpha _{i}} of any finite collection of open covers of X {\displaystyle \textstyle X} . The mean dimension is the non-negative extended real number: where α n = ⋁ i = 0 n − 1 T − i α . {\displaystyle \textstyle \alpha ^{n}=\bigvee _{i=0}^{n-1}T^{-i}\alpha .} If the compact Hausdorff topological space X {\displaystyle \textstyle X} is metrizable and d {\displaystyle \textstyle d} is a compatible metric, an equivalent definition can be given. For ε > 0 {\displaystyle \textstyle \varepsilon >0} , let Widim ε ⁡ ( X , d ) {\displaystyle \textstyle \operatorname {Widim} _{\varepsilon }(X,d)} be the minimal non-negative integer n {\displaystyle \textstyle n} , such that there exists an open finite cover of X {\displaystyle \textstyle X} by sets of diameter less than ε {\displaystyle \textstyle \varepsilon } such that any n + 2 {\displaystyle \textstyle n+2} distinct sets from this cover have empty intersection. Note that in terms of this definition the Lebesgue covering dimension is defined by dim L e b ⁡ ( X ) = sup ε > 0 Widim ε ⁡ ( X , d ) {\displaystyle \textstyle \dim _{\mathrm {Leb} }(X)=\sup _{\varepsilon >0}\operatorname {Widim} _{\varepsilon }(X,d)} . 
Let The mean dimension is the non-negative extended real number: Let d ∈ N {\displaystyle \textstyle d\in \mathbb {N} } . Let X = ( [ 0 , 1 ] d ) Z {\displaystyle \textstyle X=([0,1]^{d})^{\mathbb {Z} }} and T : X → X {\displaystyle \textstyle T:X\rightarrow X} be the shift homeomorphism ( … , x − 2 , x − 1 , x 0 , x 1 , x 2 , … ) → ( … , x − 1 , x 0 , x 1 , x 2 , x 3 , … ) {\displaystyle \textstyle (\ldots ,x_{-2},x_{-1},\mathbf {x_{0}} ,x_{1},x_{2},\ldots )\rightarrow (\ldots ,x_{-1},x_{0},\mathbf {x_{1}} ,x_{2},x_{3},\ldots )} , then mdim ⁡ ( X , T ) = d {\displaystyle \textstyle \operatorname {mdim} (X,T)=d} .
https://en.wikipedia.org/wiki/Mean_dimension
In organizational management , mean down time ( MDT ) is the average time that a system is non-operational. This includes all downtime associated with repair , corrective and preventive maintenance , self-imposed downtime , and any logistics or administrative delays. The inclusion of delay times distinguishes mean down time from mean time to repair (MTTR), which includes only downtime specifically attributable to repairs. [ 1 ] Mean Down Time key factors: There are four main ways of reducing MDT: This business-related article is a stub . You can help Wikipedia by expanding it .
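As a small numerical illustration of the distinction drawn above, the sketch below computes MDT and MTTR from a hypothetical outage log in which every downtime event is split into a logistics/administrative delay and the active repair; all numbers are invented for illustration.

# Hypothetical outage records (hours): (logistics/administrative delay, active repair time)
outages = [(4.0, 2.0), (1.5, 3.0), (8.0, 1.0), (0.5, 2.5)]

total_downtime = sum(delay + repair for delay, repair in outages)
total_repair = sum(repair for _, repair in outages)

mdt = total_downtime / len(outages)    # mean down time: delays included
mttr = total_repair / len(outages)     # mean time to repair: repair work only

print(f"MDT  = {mdt:.2f} h")   # larger, because delays count toward downtime
print(f"MTTR = {mttr:.2f} h")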
https://en.wikipedia.org/wiki/Mean_down_time
In fluid dynamics , the fluid flow is often decomposed into a mean flow and deviations from the mean . The averaging can be done either in space or in time, or by ensemble averaging . Calculation of the mean flow may often be as simple as taking the mathematical mean : simply add up the given flow rates and then divide by the number of readings. For example, given two discharges ( Q ) of 3 m³/s and 5 m³/s, we can use these flow rates to calculate the mean flow rate Q mean , which in this case is Q mean = 4 m³/s. This fluid dynamics –related article is a stub . You can help Wikipedia by expanding it .
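A brief sketch of the decomposition mentioned above: a sampled velocity signal is split into its time average (the mean flow) and the deviations from it, whose own mean is zero by construction. The synthetic signal is purely illustrative.

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 1001)
u = 4.0 + 0.5 * np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)  # synthetic velocity record

u_mean = u.mean()         # mean flow (time average)
u_fluct = u - u_mean      # deviations from the mean

print(f"mean flow              = {u_mean:.3f}")
print(f"mean of the deviations = {u_fluct.mean():.2e}")  # ~0 by construction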
https://en.wikipedia.org/wiki/Mean_flow
In physics , mean free path is the average distance over which a moving particle (such as an atom , a molecule , or a photon ) travels before substantially changing its direction or energy (or, in a specific context, other properties), typically as a result of one or more successive collisions with other particles. Imagine a beam of particles being shot through a target, and consider an infinitesimally thin slab of the target (see the figure). [ 1 ] The atoms (or particles) that might stop a beam particle are shown in red. The magnitude of the mean free path depends on the characteristics of the system. Assuming that all the target particles are at rest but only the beam particle is moving, that gives an expression for the mean free path: where ℓ is the mean free path, n is the number of target particles per unit volume, and σ is the effective cross-sectional area for collision. The area of the slab is L 2 , and its volume is L 2 dx . The typical number of stopping atoms in the slab is the concentration n times the volume, i.e., n L 2 dx . The probability that a beam particle will be stopped in that slab is the net area of the stopping atoms divided by the total area of the slab: where σ is the area (or, more formally, the " scattering cross-section ") of one atom. The drop in beam intensity equals the incoming beam intensity multiplied by the probability of the particle being stopped within the slab: This is an ordinary differential equation : whose solution is known as Beer–Lambert law and has the form I = I 0 e − x / ℓ {\displaystyle I=I_{0}e^{-x/\ell }} , where x is the distance traveled by the beam through the target, and I 0 is the beam intensity before it entered the target; ℓ is called the mean free path because it equals the mean distance traveled by a beam particle before being stopped. To see this, note that the probability that a particle is absorbed between x and x + dx is given by Thus the expectation value (or average, or simply mean) of x is The fraction of particles that are not stopped ( attenuated ) by the slab is called transmission T = I / I 0 = e − x / ℓ {\displaystyle T=I/I_{0}=e^{-x/\ell }} , where x is equal to the thickness of the slab. In the kinetic theory of gases , the mean free path of a particle, such as a molecule , is the average distance the particle travels between collisions with other moving particles. The derivation above assumed the target particles to be at rest; therefore, in reality, the formula ℓ = ( n σ ) − 1 {\displaystyle \ell =(n\sigma )^{-1}} holds for a beam particle with a high speed v {\displaystyle v} relative to the velocities of an ensemble of identical particles with random locations. In that case, the motions of target particles are comparatively negligible, hence the relative velocity v r e l ≈ v {\displaystyle v_{\rm {rel}}\approx v} . If, on the other hand, the beam particle is part of an established equilibrium with identical particles, then the square of relative velocity is: ⟨ v r e l a t i v e 2 ⟩ = ⟨ ( v 1 − v 2 ) 2 ⟩ = ⟨ v 1 2 + v 2 2 − 2 v 1 ⋅ v 2 ⟩ . 
{\displaystyle \langle \mathbf {v} _{\rm {relative}}^{2}\rangle =\langle (\mathbf {v} _{1}-\mathbf {v} _{2})^{2}\rangle =\langle \mathbf {v} _{1}^{2}+\mathbf {v} _{2}^{2}-2\mathbf {v} _{1}\cdot \mathbf {v} _{2}\rangle .} In equilibrium, v 1 {\displaystyle \mathbf {v} _{1}} and v 2 {\displaystyle \mathbf {v} _{2}} are random and uncorrelated, therefore ⟨ v 1 ⋅ v 2 ⟩ = 0 {\displaystyle \langle \mathbf {v} _{1}\cdot \mathbf {v} _{2}\rangle =0} , and the relative speed is v r e l = ⟨ v r e l a t i v e 2 ⟩ = ⟨ v 1 2 + v 2 2 ⟩ = 2 v . {\displaystyle v_{\rm {rel}}={\sqrt {\langle \mathbf {v} _{\rm {relative}}^{2}\rangle }}={\sqrt {\langle \mathbf {v} _{1}^{2}+\mathbf {v} _{2}^{2}\rangle }}={\sqrt {2}}v.} This means that the number of collisions is 2 {\displaystyle {\sqrt {2}}} times the number with stationary targets. Therefore, the following relationship applies: [ 2 ] and using n = N / V = p / ( k B T ) {\displaystyle n=N/V=p/(k_{\text{B}}T)} ( ideal gas law ) and σ = π d 2 {\displaystyle \sigma =\pi d^{2}} (effective cross-sectional area for spherical particles with diameter d {\displaystyle d} ), it may be shown that the mean free path is [ 3 ] where k B is the Boltzmann constant , p {\displaystyle p} is the pressure of the gas and T {\displaystyle T} is the absolute temperature. In practice, the diameter of gas molecules is not well defined. In fact, the kinetic diameter of a molecule is defined in terms of the mean free path. Typically, gas molecules do not behave like hard spheres, but rather attract each other at larger distances and repel each other at shorter distances, as can be described with a Lennard-Jones potential . One way to deal with such "soft" molecules is to use the Lennard-Jones σ parameter as the diameter. Another way is to assume a hard-sphere gas that has the same viscosity as the actual gas being considered. This leads to a mean free path [ 4 ] where m {\displaystyle m} is the molecular mass , ρ = m p / ( k B T ) {\displaystyle \rho =mp/(k_{\text{B}}T)} is the density of ideal gas, and μ is the dynamic viscosity. This expression can be put into the following convenient form with R s p e c i f i c = k B / m {\displaystyle R_{\rm {specific}}=k_{\text{B}}/m} being the specific gas constant , equal to 287 J/(kg*K) for air. Viscosity μ is low, 18.5 μPa·s at (25 °C, 1 bar), and p-dependent. The following table lists some typical values for air at different pressures at room temperature. Note that different definitions of the molecular diameter, as well as different assumptions about the value of atmospheric pressure (100 vs 101.3 kPa) and room temperature (293.15 vs 296.15 K (20-23 °C) or even 300 K) can lead to slightly different values of the mean free path. In gamma-ray radiography the mean free path of a pencil beam of mono-energetic photons is the average distance a photon travels between collisions with atoms of the target material. It depends on the material and the energy of the photons: where μ is the linear attenuation coefficient , μ/ρ is the mass attenuation coefficient and ρ is the density of the material. The mass attenuation coefficient can be looked up or calculated for any material and energy combination using the National Institute of Standards and Technology (NIST) databases. [ 8 ] [ 9 ] In X-ray radiography the calculation of the mean free path is more complicated, because photons are not mono-energetic, but have some distribution of energies called a spectrum . 
As photons move through the target material, they are attenuated with probabilities depending on their energy, as a result their distribution changes in process called spectrum hardening. Because of spectrum hardening, the mean free path of the X-ray spectrum changes with distance. Sometimes one measures the thickness of a material in the number of mean free paths . Material with the thickness of one mean free path will attenuate to 37% (1/ e ) of photons. This concept is closely related to half-value layer (HVL): a material with a thickness of one HVL will attenuate 50% of photons. A standard x-ray image is a transmission image, an image with negative logarithm of its intensities is sometimes called a number of mean free paths image. In macroscopic charge transport, the mean free path of a charge carrier in a metal ℓ {\displaystyle \ell } is proportional to the electrical mobility μ {\displaystyle \mu } , a value directly related to electrical conductivity , that is: where q is the charge , τ {\displaystyle \tau } is the mean free time , m * is the effective mass , and v F is the Fermi velocity of the charge carrier. The Fermi velocity can easily be derived from the Fermi energy via the non-relativistic kinetic energy equation. In thin films , however, the film thickness can be smaller than the predicted mean free path, making surface scattering much more noticeable, effectively increasing the resistivity . Electron mobility through a medium with dimensions smaller than the mean free path of electrons occurs through ballistic conduction or ballistic transport. In such scenarios electrons alter their motion only in collisions with conductor walls. If one takes a suspension of non-light-absorbing particles of diameter d with a volume fraction Φ , the mean free path of the photons is: [ 10 ] where Q s is the scattering efficiency factor. Q s can be evaluated numerically for spherical particles using Mie theory . In an otherwise empty cavity, the mean free path of a single particle bouncing off the walls is: where V is the volume of the cavity, S is the total inside surface area of the cavity, and F is a constant related to the shape of the cavity. For most simple cavity shapes, F is approximately 4. [ 11 ] This relation is used in the derivation of the Sabine equation in acoustics, using a geometrical approximation of sound propagation. [ 12 ] In particle physics the concept of the mean free path is not commonly used, being replaced by the similar concept of attenuation length . In particular, for high-energy photons, which mostly interact by electron–positron pair production , the radiation length is used much like the mean free path in radiography. Independent-particle models in nuclear physics require the undisturbed orbiting of nucleons within the nucleus before they interact with other nucleons. [ 13 ] The effective mean free path of a nucleon in nuclear matter must be somewhat larger than the nuclear dimensions in order to allow the use of the independent particle model. This requirement seems to be in contradiction to the assumptions made in the theory ... We are facing here one of the fundamental problems of nuclear structure physics which has yet to be solved.
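Using the hard-sphere result quoted above, ℓ = k_B T / (√2 π d² p), the mean free path of air can be estimated at a few pressures. The kinetic diameter d ≈ 3.7 × 10⁻¹⁰ m below is an assumed representative value; as noted in the text, other choices of d (or of the reference temperature and pressure) shift the numbers somewhat.

import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 293.15             # assumed room temperature, K
d = 3.7e-10            # assumed kinetic diameter of an "air" molecule, m

def mean_free_path(p_pa):
    """Mean free path (m) of an ideal hard-sphere gas at pressure p (Pa)."""
    return k_B * T / (math.sqrt(2) * math.pi * d**2 * p_pa)

for p in (101_325.0, 100.0, 0.1):   # atmospheric pressure, rough vacuum, high vacuum (Pa)
    print(f"p = {p:>9.1f} Pa   mean free path = {mean_free_path(p):.3e} m")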
https://en.wikipedia.org/wiki/Mean_free_path
Molecules in a fluid constantly collide with each other. The mean free time for a molecule in a fluid is the average time between collisions. The mean free path of the molecule is the product of the average speed and the mean free time. [ 1 ] These concepts are used in the kinetic theory of gases to compute transport coefficients such as the viscosity . [ 2 ] In a gas the mean free path may be much larger than the average distance between molecules. In a liquid these two lengths may be very similar. Scattering is a random process. It is often modeled as a Poisson process , in which the probability of a collision in a small time interval d t {\displaystyle dt} is d t / τ {\displaystyle dt/\tau } . For a Poisson process like this, the average time since the last collision, the average time until the next collision and the average time between collisions are all equal to τ {\displaystyle \tau } . [ 1 ] This article about statistical mechanics is a stub . You can help Wikipedia by expanding it .
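A short simulation of the Poisson collision model described above: the times between successive collisions are exponentially distributed, and their sample mean approaches the mean free time τ. The value of τ and the average speed below are arbitrary assumptions used only for illustration.

import numpy as np

rng = np.random.default_rng(42)
tau = 2e-9                                   # assumed mean free time, s
gaps = rng.exponential(tau, size=1_000_000)  # times between successive collisions

print(f"sample mean of collision gaps: {gaps.mean():.3e} s (target {tau:.1e} s)")

v_avg = 500.0                                # assumed average molecular speed, m/s
print(f"implied mean free path: {v_avg * gaps.mean():.3e} m")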
https://en.wikipedia.org/wiki/Mean_free_time
Mean inter-particle distance (or mean inter-particle separation) is the mean distance between microscopic particles (usually atoms or molecules ) in a macroscopic body. From the very general considerations, the mean inter-particle distance is proportional to the size of the per-particle volume 1 / n {\displaystyle 1/n} , i.e., where n = N / V {\displaystyle n=N/V} is the particle density . However, barring a few simple cases such as the ideal gas model, precise calculations of the proportionality factor are impossible analytically. Therefore, approximate expressions are often used. One such estimation is the Wigner–Seitz radius which corresponds to the radius of a sphere having per-particle volume 1 / n {\displaystyle 1/n} . Another popular definition is corresponding to the length of the edge of the cube with the per-particle volume 1 / n {\displaystyle 1/n} . The two definitions differ by a factor of approximately 1.61 {\displaystyle 1.61} , so one has to exercise care if an article fails to define the parameter exactly. On the other hand, it is often used in qualitative statements where such a numeric factor is either irrelevant or plays an insignificant role, e.g., We want to calculate probability distribution function of distance to the nearest neighbor (NN) particle. (The problem was first considered by Paul Hertz ; [ 1 ] for a modern derivation see, e.g.,. [ 2 ] ) Let us assume N {\displaystyle N} particles inside a sphere having volume V {\displaystyle V} , so that n = N / V {\displaystyle n=N/V} . Note that since the particles in the ideal gas are non-interacting, the probability of finding a particle at a certain distance from another particle is the same as the probability of finding a particle at the same distance from any other point; we shall use the center of the sphere. An NN particle at a distance r {\displaystyle r} means exactly one of the N {\displaystyle N} particles resides at that distance while the rest N − 1 {\displaystyle N-1} particles are at larger distances, i.e., they are somewhere outside the sphere with radius r {\displaystyle r} . The probability to find a particle at the distance from the origin between r {\displaystyle r} and r + d r {\displaystyle r+dr} is ( 4 π r 2 / V ) d r {\displaystyle (4\pi r^{2}/V)dr} , plus we have N {\displaystyle N} kinds of way to choose which particle, while the probability to find a particle outside that sphere is 1 − 4 π r 3 / 3 V {\displaystyle 1-4\pi r^{3}/3V} . The sought-for expression is then where we substituted Note that a {\displaystyle a} is the Wigner-Seitz radius . Finally, taking the N → ∞ {\displaystyle N\rightarrow \infty } limit and using lim x → ∞ ( 1 + 1 x ) x = e {\displaystyle \lim _{x\rightarrow \infty }\left(1+{\frac {1}{x}}\right)^{x}=e} , we obtain One can immediately check that The distribution peaks at or, using the t = x 3 {\displaystyle t=x^{3}} substitution, where Γ {\displaystyle \Gamma } is the gamma function . Thus, In particular,
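The nearest-neighbour statistics derived above for an ideal gas can be checked by Monte Carlo. The sketch below scatters points uniformly in a periodic box (a Poisson gas) and compares the sampled mean nearest-neighbour distance with the analytic value Γ(4/3)·a, where a is the Wigner–Seitz radius; SciPy's periodic cKDTree is assumed to be available.

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

rng = np.random.default_rng(0)
N, L = 20_000, 1.0                     # particles in a periodic cubic box of side L
pts = rng.random((N, 3)) * L
n = N / L**3                           # number density

tree = cKDTree(pts, boxsize=L)         # periodic boundary conditions
dist, _ = tree.query(pts, k=2)         # k=2 because the closest hit is the point itself
nn = dist[:, 1]

a = (3.0 / (4.0 * np.pi * n)) ** (1.0 / 3.0)   # Wigner-Seitz radius
print(f"simulated mean NN distance: {nn.mean():.5f}")
print(f"analytic Gamma(4/3) * a   : {gamma(4.0 / 3.0) * a:.5f}")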
https://en.wikipedia.org/wiki/Mean_inter-particle_distance
In calculus , and especially multivariable calculus , the mean of a function is loosely defined as the ” average " value of the function over its domain . In a one-dimensional domain , the mean of a function f ( x ) over the interval ( a , b ) is defined by: [ 1 ] Recall that a defining property of the average value y ¯ {\displaystyle {\bar {y}}} of finitely many numbers y 1 , y 2 , … , y n {\displaystyle y_{1},y_{2},\dots ,y_{n}} is that n y ¯ = y 1 + y 2 + ⋯ + y n {\displaystyle n{\bar {y}}=y_{1}+y_{2}+\cdots +y_{n}} . In other words, y ¯ {\displaystyle {\bar {y}}} is the constant value which when added n {\displaystyle n} times equals the result of adding the n {\displaystyle n} terms y 1 , … , y n {\displaystyle y_{1},\dots ,y_{n}} . By analogy, a defining property of the average value f ¯ {\displaystyle {\bar {f}}} of a function over the interval [ a , b ] {\displaystyle [a,b]} is that In other words, f ¯ {\displaystyle {\bar {f}}} is the constant value which when integrated over [ a , b ] {\displaystyle [a,b]} equals the result of integrating f ( x ) {\displaystyle f(x)} over [ a , b ] {\displaystyle [a,b]} . But the integral of a constant f ¯ {\displaystyle {\bar {f}}} is just See also the first mean value theorem for integration , which guarantees that if f {\displaystyle f} is continuous then there exists a point c ∈ ( a , b ) {\displaystyle c\in (a,b)} such that The point f ( c ) {\displaystyle f(c)} is called the mean value of f ( x ) {\displaystyle f(x)} on [ a , b ] {\displaystyle [a,b]} . So we write f ¯ = f ( c ) {\displaystyle {\bar {f}}=f(c)} and rearrange the preceding equation to get the above definition. In several variables , the mean over a relatively compact domain U in a Euclidean space is defined by where Vol ( U ) {\displaystyle {\hbox{Vol}}(U)} and d V {\displaystyle dV} are, respectively, the domain volume and volume element (or generalizations thereof, e.g., volume form ). The above generalizes the arithmetic mean to functions. On the other hand, it is also possible to generalize the geometric mean to functions by: More generally, in measure theory and probability theory , either sort of mean plays an important role. In this context, Jensen's inequality places sharp estimates on the relationship between these two different notions of the mean of a function. There is also a harmonic average of functions and a quadratic average (or root mean square ) of functions.
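A quick numerical illustration of the one-dimensional definition: the mean of f(x) = x² over [0, 3] is (1/(b − a))∫f = 3, and the point c guaranteed by the first mean value theorem for integration, with f(c) equal to that mean, is c = √3 ∈ (0, 3). SciPy's quadrature routine is assumed to be available.

import numpy as np
from scipy.integrate import quad

f = lambda x: x**2
a, b = 0.0, 3.0

integral, _ = quad(f, a, b)
f_bar = integral / (b - a)     # mean value of f on [a, b]
print(f"mean of x^2 on [0, 3]: {f_bar}")           # 3.0
print(f"c with f(c) = mean   : {np.sqrt(f_bar)}")  # sqrt(3), which lies inside (0, 3)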
https://en.wikipedia.org/wiki/Mean_of_a_function
Mean opinion score ( MOS ) is a measure used in the domain of Quality of Experience and telecommunications engineering , representing overall quality of a stimulus or system. It is the arithmetic mean over all individual "values on a predefined scale that a subject assigns to his opinion of the performance of a system quality". [ 1 ] Such ratings are usually gathered in a subjective quality evaluation test , but they can also be algorithmically estimated. MOS is a commonly used measure for video, audio, and audiovisual quality evaluation, but not restricted to those modalities. ITU-T has defined several ways of referring to a MOS in Recommendation ITU-T P.800.1 , depending on whether the score was obtained from audiovisual, conversational, listening, talking, or video quality tests. The MOS is expressed as a single rational number, typically in the range 1–5, where 1 is lowest perceived quality, and 5 is the highest perceived quality. Other MOS ranges are also possible, depending on the rating scale that has been used in the underlying test. The Absolute Category Rating scale is very commonly used, which maps ratings between Bad and Excellent to numbers between 1 and 5, as seen in below table. Other standardized quality rating scales exist in ITU-T Recommendations (such as ITU-T P.800 or ITU-T P.910 ). For example, one could use a continuous scale ranging between 1–100. Which scale is used depends on the purpose of the test. In certain contexts there are no statistically significant differences between ratings for the same stimuli when they are obtained using different scales. [ 2 ] The MOS is calculated as the arithmetic mean over single ratings performed by human subjects for a given stimulus in a subjective quality evaluation test . Thus: Where R {\displaystyle R} are the individual ratings for a given stimulus by N {\displaystyle N} subjects. The MOS is subject to certain mathematical properties and biases. In general, there is an ongoing debate on the usefulness of the MOS to quantify Quality of Experience in a single scalar value. [ 3 ] When the MOS is acquired using a categorical rating scales, it is based on – similar to Likert scales – an ordinal scale . In this case, the ranking of the scale items is known, but their interval is not. Therefore, it is mathematically incorrect to calculate a mean over individual ratings in order to obtain the central tendency; the median should be used instead. [ 4 ] However, in practice and in the definition of MOS, it is considered acceptable to calculate the arithmetic mean. It has been shown that for categorical rating scales (such as ACR), the individual items are not perceived equidistant by subjects. For example, there may be a larger "gap" between Good and Fair than there is between Good and Excellent . The perceived distance may also depend on the language into which the scale is translated. [ 5 ] However, there exist studies that could not prove a significant impact of scale translation on the obtained results. [ 6 ] Several other biases are present in the way MOS ratings are typically acquired. [ 7 ] In addition to the above-mentioned issues with scales that are perceived non-linearly, there is a so-called "range-equalization bias": subjects, over the course of a subjective experiment, tend to give scores that span the entire rating scale. This makes it impossible to compare two different subjective tests if the range of presented quality differs. 
In other words, the MOS is never an absolute measure of quality, but only relative to the test in which it has been acquired. For the above reasons – and due to several other contextual factors influencing the perceived quality in a subjective test – a MOS value should only be reported if the context in which the values have been collected in is known and reported as well. MOS values gathered from different contexts and test designs therefore should not be directly compared. Recommendation ITU-T P.800.2 prescribes how MOS values should be reported. Specifically, P.800.2 says: it is not meaningful to directly compare MOS values produced from separate experiments, unless those experiments were explicitly designed to be compared, and even then the data should be statistically analysed to ensure that such a comparison is valid. MOS historically originates from subjective measurements where listeners would sit in a "quiet room" and score a telephone call quality as they perceived it. This kind of test methodology had been in use in the telephony industry for decades and was standardized in Recommendation ITU-T P.800 . It specifies that "the talker should be seated in a quiet room with volume between 30 and 120 m³ and a reverberation time less than 500 ms (preferably in the range 200–300 ms). The room noise level must be below 30 dBA with no dominant peaks in the spectrum." Requirements for other modalities were similarly specified in later ITU-T Recommendations. Obtaining MOS ratings may be time-consuming and expensive as it requires the recruitment of human assessors. For various use cases such as codec development or service quality monitoring purposes – where quality should be estimated repeatedly and automatically – MOS scores can also be predicted by objective quality models , which typically have been developed and trained using human MOS ratings. A question that arises from using such models is whether the MOS differences produced are noticeable to the users. For example, when rating images on a five point MOS scale, an image with a MOS equal to 5 is expected to be noticeably better in quality than one with a MOS equal to 1. Contrary to that, it is not evident whether an image with a MOS equal to 3.8 is noticeably better in quality than one with a MOS equal to 3.6. Research conducted on determining the smallest MOS difference that is perceptible to users for digital photographs showed that a MOS difference of approximately 0.46 is required in order for 75% of the users to be able to detect the higher quality image. [ 8 ] Nevertheless, image quality expectation, and hence MOS, changes over time with the change of user expectations. As a result, minimum noticeable MOS differences determined using analytical methods such as in [ 8 ] may change over time.
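Computing a MOS from individual ratings is just an arithmetic mean, but, given the caveats above, it is good practice to report it together with a confidence interval and the test context. The sketch below does so for a set of invented ACR ratings on the 1–5 scale; the numbers are purely illustrative.

import numpy as np
from scipy import stats

ratings = np.array([4, 5, 3, 4, 4, 2, 5, 4, 3, 4])   # hypothetical ACR ratings (1-5)

mos = ratings.mean()
sem = stats.sem(ratings)                              # standard error of the mean
ci = stats.t.interval(0.95, df=len(ratings) - 1, loc=mos, scale=sem)

print(f"MOS = {mos:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")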
https://en.wikipedia.org/wiki/Mean_opinion_score
In game theory , a mean payoff game is a zero-sum game played on the vertices of a weighted directed graph . The game is played as follows: at the start of the game, a token is placed on one of the vertices of the graph. Each vertex is assigned to either the Maximizer or the Minimizer. The player that controls the vertex the token is currently on may choose one outgoing edge along which the token moves next. In doing so, the Minimizer pays the Maximizer the number that is on the edge. Then the player controlling the vertex the token reaches next chooses where it goes, and this continues indefinitely. The objective for the Maximizer is to maximize their long-term average payoff, and the Minimizer has the opposite objective. A mean payoff game consists of a graph G = ( V M a x ∪ V M i n , E ) {\displaystyle G=(V_{Max}\cup V_{Min},E)} , and a function w : E → R {\displaystyle w:E\to \mathbb {R} } where V = V M a x ∪ V M i n {\displaystyle V=V_{Max}\cup V_{Min}} is the set of vertices, which are partitioned between the players, and where w ( e ) {\displaystyle w(e)} is the weight of an edge. Often, the graph is assumed to be sinkless , which means that every vertex has at least one outgoing edge. A play is a possible outcome of the game, which is an infinite walk on the graph; we can write this as a sequence of edges: π = e 1 , e 2 , e 3 , … {\displaystyle \pi =e_{1},e_{2},e_{3},\ldots } where the head of e i {\displaystyle e_{i}} equals the tail of e i + 1 {\displaystyle e_{i+1}} . The objective value of the game can then be written as follows: O ( π ) = lim inf t → ∞ 1 t ∑ i = 1 t w ( e i ) {\displaystyle O(\pi )=\liminf _{t\to \infty }{\frac {1}{t}}\sum _{i=1}^{t}w(e_{i})} A strategy for the Maximizer is a function σ : F W M a x → E {\displaystyle \sigma :FW_{Max}\to E} , where F W v {\displaystyle FW_{v}} is the set of finite walks that start at the initial vertex and end at some vertex v ∈ V M a x {\displaystyle v\in V_{Max}} , which returns an outgoing edge of the end vertex v {\displaystyle v} . A strategy τ {\displaystyle \tau } for the Minimizer can be defined analogously. If both players fix a strategy, say they pick strategies σ {\displaystyle \sigma } and τ {\displaystyle \tau } , then the outcome of the game is fixed, and the resulting play is the path π σ τ {\displaystyle \pi _{\sigma \tau }} . One of the fundamental results for mean payoff games is that they are positionally determined. [ 1 ] This means in our case that the game has a unique value , and that each player has a strategy that attains the value, and that strategy is positional , i.e., it depends only on the current vertex the token is on. In formulas, the following equation holds for the value V ( G , w ) {\displaystyle V(G,w)} : V ( G , w ) = max σ ∈ (positional Max strategies) inf τ ∈ (Min strategies) O ( π σ τ ) = min τ ∈ (positional Min strategies) sup σ ∈ (Max strategies) O ( π σ τ ) {\displaystyle V(G,w)=\max _{\sigma \in {\text{(positional Max strategies)}}}\inf _{\tau \in {\text{(Min strategies)}}}O(\pi _{\sigma \tau })=\min _{\tau \in {\text{(positional Min strategies)}}}\sup _{\sigma \in {\text{(Max strategies)}}}O(\pi _{\sigma \tau })} Solving a mean payoff game can mean several things, although in practice finding one often also yields the other: It is a major open problem in computer science whether there exists a polynomial-time algorithm for solving any of the above problems. These problems are among the few known to be contained in both the classes NP and coNP [ 2 ] but not known to be in P.
Currently, the fastest algorithm is a randomized strategy improvement algorithm, which runs in time O ( 2 n log ⁡ ( n ) ) {\displaystyle O(2^{\sqrt {n\log(n)}})} , [ 3 ] where n {\displaystyle n} is the number of Maximizer vertices. The best deterministic algorithms run in time m 2 n / 2 {\displaystyle m2^{n/2}} , [ 4 ] where now m {\displaystyle m} is the number of edges and n {\displaystyle n} the total number of vertices. Three of the most well-known algorithms for solving mean payoff games are the following (each of which has their own slight variants): The problem of solving parity games can be polynomial-time reduced to solving mean payoff games. [ 7 ] Solving mean payoff games can be shown to be polynomial-time equivalent to many core problems concerning tropical linear programming. [ 8 ] Another closely related game to the mean payoff game is the energy game, in which the Maximizer tries to maximize the smallest cumulative sum within the play instead of the long-term average.
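One standard approach, value iteration, is easy to sketch: following Zwick and Paterson, compute the optimal total payoff v_k over k steps and use v_k/k as an approximation of the long-run average value from each vertex. The tiny game below (graph, weights and player assignment) is made up purely for illustration.

# Hypothetical game: vertex -> list of (successor, edge weight); every vertex has an outgoing edge.
edges = {0: [(1, 3), (2, -1)], 1: [(0, 2)], 2: [(2, 1), (0, 5)]}
max_player = {0: True, 1: False, 2: True}   # True = the Maximizer controls this vertex

def approx_values(k_steps=10_000):
    """Zwick-Paterson value iteration: v_k(u) = opt over edges (u, v) of w(u, v) + v_{k-1}(v)."""
    v = {u: 0.0 for u in edges}
    for _ in range(k_steps):
        v = {u: (max if max_player[u] else min)(w + v[s] for s, w in edges[u])
             for u in edges}
    return {u: val / k_steps for u, val in v.items()}   # estimate of the long-run average payoff

print(approx_values())   # every vertex reaches the optimal cycle 0 -> 1 -> 0, whose mean weight is 2.5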
https://en.wikipedia.org/wiki/Mean_payoff_game
The mean radius in astronomy is a measure for the size of planets and small Solar System bodies . Alternatively, the closely related mean diameter ( D {\displaystyle D} ), which is twice the mean radius, is also used. For a non-spherical object, the mean radius (denoted R {\displaystyle R} or r {\displaystyle r} ) is defined as the radius of the sphere that would enclose the same volume as the object. [ 1 ] In the case of a sphere, the mean radius is equal to the radius. For any irregularly shaped rigid body, there is a unique ellipsoid with the same volume and moments of inertia . [ 2 ] In astronomy, the dimensions of an object are defined as the principal axes of that special ellipsoid. [ 3 ] The dimensions of a minor planet can be uni-, bi- or tri-axial, depending on what kind of ellipsoid is used to model it. Given the dimensions of an irregularly shaped object, one can calculate its mean radius: An oblate spheroid , bi-axial, or rotational ellipsoid with axes a {\displaystyle a} and c {\displaystyle c} has a mean radius of R = ( a 2 ⋅ c ) 1 / 3 {\displaystyle R=(a^{2}\cdot c)^{1/3}} . [ 4 ] A tri-axial ellipsoid with axes a {\displaystyle a} , b {\displaystyle b} and c {\displaystyle c} has mean radius R = ( a ⋅ b ⋅ c ) 1 / 3 {\displaystyle R=(a\cdot b\cdot c)^{1/3}} . [ 1 ] The formula for a rotational ellipsoid is the special case where a = b {\displaystyle a=b} . For a sphere, which is uni-axial ( a = b = c {\displaystyle a=b=c} ), this simplifies to R = a {\displaystyle R=a} . Planets and dwarf planets are nearly spherical if they are not rotating. A rotating object that is massive enough to be in hydrostatic equilibrium will be close in shape to an ellipsoid, with the details depending on the rate of the rotation. At moderate rates, it will assume the form of either a bi-axial ( Maclaurin ) or tri-axial ( Jacobi ) ellipsoid. At faster rotations, non-ellipsoidal shapes can be expected, but these are not stable. [ 5 ]
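As a quick illustration of these formulas, the following Python snippet (with made-up semi-axis values, not data for any real body) computes the volume-equivalent mean radius of a tri-axial ellipsoid and of an oblate spheroid.

def mean_radius(a, b, c):
    """Volume-equivalent mean radius of an ellipsoid with semi-axes a, b, c."""
    return (a * b * c) ** (1.0 / 3.0)

# Tri-axial body with semi-axes 300 x 200 x 150 km: R = (300*200*150)**(1/3), about 208 km
print(mean_radius(300.0, 200.0, 150.0))
# Oblate spheroid (a = b = 250 km, c = 200 km): R = (a*a*c)**(1/3), about 232 km
print(mean_radius(250.0, 250.0, 200.0))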
https://en.wikipedia.org/wiki/Mean_radius_(astronomy)
The mean sojourn time (or sometimes mean waiting time ) for an object in a dynamical system is the amount of time an object is expected to spend in a system before leaving the system permanently. This concept is widely used in various fields, including physics, chemistry, and stochastic processes, to study the behavior of systems over time. Imagine someone is standing in line to buy a ticket at the counter. After a minute, by observing the number of customers behind them, this person can estimate the rate at which customers are entering the system (in this case, the waiting line) per unit time (one minute). By dividing the number of customers ahead by this "flow" of customers, one can estimate how much longer the wait will be to reach the counter. Formally, consider the waiting line as a system S into which there is a flow of particles (customers) and where the process of “buying a ticket” means that the particle leaves the system. This waiting time is commonly referred to as transit time. Applying Little's theorem once, the expected steady state number of particles in S equals the flow of particles into S times the mean transit time. Similar theorems have been discovered in other fields, and in physiology it was earlier known as one of the Stewart-Hamilton equations (which is used to estimate the blood volume of organs). Consider a system S in the form of a closed domain of finite volume in the Euclidean space . Further, consider the situation where there is a stream of ”equivalent” particles into S (number of particles per time unit) where each particle retains its identity while being in S and eventually – after a finite time – leaves the system irreversibly (i.e., for these particles the system is "open"). The figure above depicts the thought motion history of a single such particle, which thus moves in and out of subsystem s three times, each of which results in a transit time, namely the time spent in the subsystem between entrance and exit. The sum of these transit times is the sojourn time of s for that particular particle. If the motions of the particles are looked upon as realizations of one and the same stochastic process , it is meaningful to speak of the mean value of this sojourn time. That is, the mean sojourn time of a subsystem is the total time a particle is expected to spend in the subsystem s before leaving S for good. To see a practical significance of this quantity, we must understand that as a law of physics if the stream of particles into S is constant and all other relevant factors are kept constant, S will eventually reach steady state (i.e., the number and distribution of particles is constant everywhere in S ). It can then be demonstrated that the steady state number of particles in the subsystem s equals the stream of particles into the system S times the mean sojourn time of the subsystem. This is thus a more general form of what above was referred to as Little's theorem, and it might be called the mass-time equivalence : This has also been called the occupancy principle (where mean sojourn time is then referred to as occupancy). This mass-time equivalence has been applied in medicine for the study of metabolism of individual organs. This is a generalization of what in queuing theory is sometimes referred to as Little's theorem that applies only to the whole system S (not to an arbitrary subsystem as in the mass-time equivalence); the mean sojourn time in the Little's theorem can be interpreted as mean transit time. 
As is likely evident from the discussion of the figure above, there is a fundamental difference between the meanings of the two quantities, sojourn time and transit time: the generality of the mass-time equivalence is very much due to the special meaning of the notion of sojourn time. Only when the whole system is considered (as in Little's law) is it true that sojourn time always equals transit time. Examples of applications: 1) Queuing theory : In queuing systems, the mean sojourn time corresponds to the average time a customer or job spends in the system or in a specific queue. 2) Physics : Used to describe trapping times in potential wells or at energy barriers in molecular dynamics. 3) Markov chains : Describes the time a system spends in a transient state before transitioning.
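As a small illustration of the Markov-chain application just mentioned, the expected total time spent in each transient state of an absorbing Markov chain can be read off the fundamental matrix N = (I - Q)^(-1), where Q holds the transition probabilities among the transient states. The sketch below uses a made-up two-state example; the matrix values and variable names are illustrative only.

import numpy as np

# Q: transition probabilities among the two transient states (illustrative numbers).
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])

# Fundamental matrix: N[i, j] is the expected number of steps spent in transient
# state j when the chain starts in transient state i, before absorption.
N = np.linalg.inv(np.eye(2) - Q)
print(N)                # mean sojourn times per state
print(N.sum(axis=1))    # expected total time before leaving the transient states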
https://en.wikipedia.org/wiki/Mean_sojourn_time
The mean speed theorem , also known as the Merton rule of uniform acceleration , [ 1 ] was discovered in the 14th century by the Oxford Calculators of Merton College , and was proved by Nicole Oresme . It states that a uniformly accelerated body (starting from rest, i.e. zero initial velocity) travels the same distance as a body with uniform speed whose speed is half the final velocity of the accelerated body. [ 2 ] Oresme provided a geometrical verification for the generalized Merton rule, which we would express today as s = 1 2 ( v 0 + v f ) t {\displaystyle s={\frac {1}{2}}(v_{0}+v_{\rm {f}})t} (i.e., distance traveled is equal to one half of the sum of the initial v 0 {\displaystyle v_{0}} and final v f {\displaystyle v_{\rm {f}}} velocities, multiplied by the elapsed time t {\displaystyle t} ), by finding the area of a trapezoid . [ 3 ] Clay tablets used in Babylonian astronomy (350–50 BC) present trapezoid procedures for computing Jupiter's position and motion . [ 4 ] The medieval scientists demonstrated this theorem—the foundation of " the law of falling bodies "—long before Galileo , who is generally credited with it. Oresme's proof is also the first known example of the modelization of a physical problem as a mathematical function with a graphical representation, as well as of an early form of integration . The mathematical physicist and historian of science Clifford Truesdell , wrote: [ 5 ] The now published sources prove to us, beyond contention, that the main kinematical properties of uniformly accelerated motions , still attributed to Galileo by the physics texts, were discovered and proved by scholars of Merton college.... In principle, the qualities of Greek physics were replaced, at least for motions, by the numerical quantities that have ruled Western science ever since. The work was quickly diffused into France , Italy , and other parts of Europe . Almost immediately, Giovanni di Casale and Nicole Oresme found how to represent the results by geometrical graphs , introducing the connection between geometry and the physical world that became a second characteristic habit of Western thought ... The theorem is a special case of the more general kinematics equations for uniform acceleration.
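In modern notation the rule follows from one line of calculus. Assuming constant acceleration a, so that the final velocity is v_f = v_0 + at, the distance traveled is

s = \int_{0}^{t} (v_{0} + a\tau)\, d\tau = v_{0} t + \tfrac{1}{2} a t^{2} = \tfrac{1}{2}\bigl(v_{0} + (v_{0} + a t)\bigr)\, t = \tfrac{1}{2}\,(v_{0} + v_{\mathrm{f}})\, t ,

which is exactly the trapezoid area that Oresme computed geometrically.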
https://en.wikipedia.org/wiki/Mean_speed_theorem
In medicine , the mean systemic pressure (MSP) or mean systemic filling pressure (MSFP) is defined as the mean pressure that exists in the circulatory system when there is no blood motion. A similar term, mean circulatory filling pressure , (MCFP) is defined as the mean pressure that exists in the combined circulatory system & pulmonary system when there is no blood motion. The value of MSP in animal experimental models is approximately 7 mm Hg. It is an indicator of how full the circulatory system is (i.e. the volume of blood in the system compared to the capacity of the system), and is influenced by the volume of circulating blood and the smooth muscle tone in the walls of the venous system (which determines the capacity of the system). [ 1 ] [ 2 ] MSP is measured in two ways experimentally, and as a result has two alternative naming conventions. MSFP is measured after clamping the aortic root and the great veins at point of entry to right atrium. [ 3 ] On the other hand, MCFP is measured experimentally by briefly inducing cardiac arrest or naturally during cardiac arrest once the blood redistributes. It may also be estimated in vivo using a series of inspiratory holds when a patient is on a mechanical ventilator. [ 4 ] It can be used to demonstrate effects of drugs on the venous tone while the circulating blood volume remains constant, [ 5 ] or to measure haemodynamic changes during haemorrhage . [ 6 ] Mean systemic pressure increases if there is an increase in blood volume or if there is a decrease in venous compliance (where blood is shifted from the veins to the arteries). An increase in mean systemic pressure is reflected in a shift of the vascular function curve to the right. Mean systemic pressure is decreased by a decrease in blood volume or by an increase in venous compliance (where blood is shifted from the arteries to the veins). A decrease in mean systemic pressure is reflected in a shift of the vascular function curve to the left. Mean systemic pressure is defined by the stressed volume in the cardiovascular system and the overall systemic capacitance: Mean systemic pressure is involved in the following calculations: This cardiovascular system article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Mean_systemic_pressure
In a system the mean time between outages ( MTBO ) is the mean time between equipment failures that result in loss of system continuity or unacceptable degradation . The MTBO is calculated by the equation, M T B O = M T B F 1 − F F A S {\displaystyle MTBO={\frac {MTBF}{1-FFAS}}} where MTBF is the nonredundant mean time between failures and FFAS is the fraction of failures for which the failed equipment is automatically bypassed. This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 2022-01-22. (in support of MIL-STD-188 ). This technology-related article is a stub . You can help Wikipedia by expanding it .
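A minimal numerical illustration of the formula above (the figures are made up and not taken from the standard):

def mtbo(mtbf, ffas):
    """Mean time between outages from the nonredundant MTBF and the fraction of
    failures that are automatically bypassed (FFAS)."""
    return mtbf / (1.0 - ffas)

# Example: MTBF of 10,000 hours with 90% of failures automatically bypassed
print(mtbo(10_000.0, 0.9))   # 100,000 hours between service-affecting outages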
https://en.wikipedia.org/wiki/Mean_time_between_outages
Mean time (to) first failure (MTFF , sometimes MTTFF) is a concept in reliability engineering , which describes time to failure for non-repairable components like an integrated circuit soldered on a circuit board. [ 1 ] For repairable components like a replaceable light bulb the concept of mean time between failures is used to describe the failure rate . MTFF and MTTF (mean time to failure) have identical meanings. The key is that this is a non-repairable and non-recoverable failure. For example, the failure of a TV typically isn't measured by this criterion because the TV can be repaired. However, if this failure was due to a burned out integrated circuit, that circuit itself can't be repaired and must be replaced. The failure of that circuit is measured by mean time to failure. It's generally used to predict the first failure after manufacturing. [ 2 ] [ 3 ] This engineering-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Mean_time_to_first_failure
Mean time to recovery ( MTTR ) [ 1 ] [ 2 ] [ 3 ] is the average time that a device will take to recover from any failure. Examples of such devices range from self-resetting fuses (where the MTTR would be very short, probably seconds) to whole systems which have to be repaired or replaced. The MTTR would usually be part of a maintenance contract, where the user would pay more for a system whose MTTR was 24 hours than for one whose MTTR was, say, 7 days. This does not mean the supplier is guaranteeing to have the system up and running again within 24 hours (or 7 days) of being notified of the failure. It does mean the average repair time will tend towards 24 hours (or 7 days). A more useful maintenance contract measure is the maximum time to recovery, which can be easily measured and for which the supplier can be held accountable. Note that some suppliers will interpret MTTR to mean 'mean time to respond' and others will take it to mean 'mean time to replace/repair/recover/resolve'. The former indicates that the supplier will acknowledge a problem and initiate mitigation within a certain timeframe. Some systems may have an MTTR of zero, which means that they have redundant components which can take over the instant the primary one fails (see RAID , for example). However, the failed device involved in this redundant configuration still needs to be returned to service, and hence the device itself has a non-zero MTTR even if the system as a whole (through redundancy) has an MTTR of zero. But, as long as service is maintained, this is a minor issue. This business-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Mean_time_to_recovery
In mathematics , the mean value theorem (or Lagrange's mean value theorem ) states, roughly, that for a given planar arc between two endpoints, there is at least one point at which the tangent to the arc is parallel to the secant through its endpoints. It is one of the most important results in real analysis . This theorem is used to prove statements about a function on an interval starting from local hypotheses about derivatives at points of the interval. A special case of this theorem for inverse interpolation of the sine was first described by Parameshvara (1380–1460), from the Kerala School of Astronomy and Mathematics in India , in his commentaries on Govindasvāmi and Bhāskara II . [ 1 ] A restricted form of the theorem was proved by Michel Rolle in 1691; the result was what is now known as Rolle's theorem , and was proved only for polynomials, without the techniques of calculus. The mean value theorem in its modern form was stated and proved by Augustin Louis Cauchy in 1823. [ 2 ] Many variations of this theorem have been proved since then. [ 3 ] [ 4 ] Let f : [ a , b ] → R {\displaystyle f:[a,b]\to \mathbb {R} } be a continuous function on the closed interval [ a , b ] {\displaystyle [a,b]} , and differentiable on the open interval ( a , b ) {\displaystyle (a,b)} , where a < b {\displaystyle a<b} . Then there exists some c {\displaystyle c} in ( a , b ) {\displaystyle (a,b)} such that: [ 5 ] The mean value theorem is a generalization of Rolle's theorem , which assumes f ( a ) = f ( b ) {\displaystyle f(a)=f(b)} , so that the right-hand side above is zero. The mean value theorem is still valid in a slightly more general setting. One only needs to assume that f : [ a , b ] → R {\displaystyle f:[a,b]\to \mathbb {R} } is continuous on [ a , b ] {\displaystyle [a,b]} , and that for every x {\displaystyle x} in ( a , b ) {\displaystyle (a,b)} the limit exists as a finite number or equals ∞ {\displaystyle \infty } or − ∞ {\displaystyle -\infty } . If finite, that limit equals f ′ ( x ) {\displaystyle f'(x)} . An example where this version of the theorem applies is given by the real-valued cube root function mapping x ↦ x 1 / 3 {\displaystyle x\mapsto x^{1/3}} , whose derivative tends to infinity at the origin. The expression f ( b ) − f ( a ) b − a {\textstyle {\frac {f(b)-f(a)}{b-a}}} gives the slope of the line joining the points ( a , f ( a ) ) {\displaystyle (a,f(a))} and ( b , f ( b ) ) {\displaystyle (b,f(b))} , which is a chord of the graph of f {\displaystyle f} , while f ′ ( x ) {\displaystyle f'(x)} gives the slope of the tangent to the curve at the point ( x , f ( x ) ) {\displaystyle (x,f(x))} . Thus the mean value theorem says that given any chord of a smooth curve, we can find a point on the curve lying between the end-points of the chord such that the tangent of the curve at that point is parallel to the chord. The following proof illustrates this idea. Define g ( x ) = f ( x ) − r x {\displaystyle g(x)=f(x)-rx} , where r {\displaystyle r} is a constant. Since f {\displaystyle f} is continuous on [ a , b ] {\displaystyle [a,b]} and differentiable on ( a , b ) {\displaystyle (a,b)} , the same is true for g {\displaystyle g} . We now want to choose r {\displaystyle r} so that g {\displaystyle g} satisfies the conditions of Rolle's theorem . 
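Written out, the conclusion of the theorem is the familiar equality below, and in the proof sketch the constant r is chosen to be the slope of the chord, so that Rolle's theorem applies to g:

f'(c) = \frac{f(b) - f(a)}{b - a}, \qquad r = \frac{f(b) - f(a)}{b - a} \ \text{ so that }\ g(a) = f(a) - ra = f(b) - rb = g(b).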
Namely By Rolle's theorem , since g {\displaystyle g} is differentiable and g ( a ) = g ( b ) {\displaystyle g(a)=g(b)} , there is some c {\displaystyle c} in ( a , b ) {\displaystyle (a,b)} for which g ′ ( c ) = 0 {\displaystyle g'(c)=0} , and it follows from the equality g ( x ) = f ( x ) − r x {\displaystyle g(x)=f(x)-rx} that, Theorem 1: Assume that f {\displaystyle f} is a continuous, real-valued function, defined on an arbitrary interval I {\displaystyle I} of the real line. If the derivative of f {\displaystyle f} at every interior point of the interval I {\displaystyle I} exists and is zero, then f {\displaystyle f} is constant on I {\displaystyle I} . Proof: Assume the derivative of f {\displaystyle f} at every interior point of the interval I {\displaystyle I} exists and is zero. Let ( a , b ) {\displaystyle (a,b)} be an arbitrary open interval in I {\displaystyle I} . By the mean value theorem, there exists a point c {\displaystyle c} in ( a , b ) {\displaystyle (a,b)} such that This implies that f ( a ) = f ( b ) {\displaystyle f(a)=f(b)} . Thus, f {\displaystyle f} is constant on the interior of I {\displaystyle I} and thus is constant on I {\displaystyle I} by continuity. (See below for a multivariable version of this result.) Remarks: Theorem 2: If f ′ ( x ) = g ′ ( x ) {\displaystyle f'(x)=g'(x)} for all x {\displaystyle x} in an interval ( a , b ) {\displaystyle (a,b)} of the domain of these functions, then f − g {\displaystyle f-g} is constant, i.e. f = g + c {\displaystyle f=g+c} where c {\displaystyle c} is a constant on ( a , b ) {\displaystyle (a,b)} . Proof: Let F ( x ) = f ( x ) − g ( x ) {\displaystyle F(x)=f(x)-g(x)} , then F ′ ( x ) = f ′ ( x ) − g ′ ( x ) = 0 {\displaystyle F'(x)=f'(x)-g'(x)=0} on the interval ( a , b ) {\displaystyle (a,b)} , so the above theorem 1 tells that F ( x ) = f ( x ) − g ( x ) {\displaystyle F(x)=f(x)-g(x)} is a constant c {\displaystyle c} or f = g + c {\displaystyle f=g+c} . Theorem 3: If F {\displaystyle F} is an antiderivative of f {\displaystyle f} on an interval I {\displaystyle I} , then the most general antiderivative of f {\displaystyle f} on I {\displaystyle I} is F ( x ) + c {\displaystyle F(x)+c} where c {\displaystyle c} is a constant. Proof: It directly follows from the theorem 2 above. Cauchy's mean value theorem , also known as the extended mean value theorem , is a generalization of the mean value theorem. [ 6 ] [ 7 ] It states: if the functions f {\displaystyle f} and g {\displaystyle g} are both continuous on the closed interval [ a , b ] {\displaystyle [a,b]} and differentiable on the open interval ( a , b ) {\displaystyle (a,b)} , then there exists some c ∈ ( a , b ) {\displaystyle c\in (a,b)} , such that Of course, if g ( a ) ≠ g ( b ) {\displaystyle g(a)\neq g(b)} and g ′ ( c ) ≠ 0 {\displaystyle g'(c)\neq 0} , this is equivalent to: Geometrically, this means that there is some tangent to the graph of the curve [ 8 ] which is parallel to the line defined by the points ( f ( a ) , g ( a ) ) {\displaystyle (f(a),g(a))} and ( f ( b ) , g ( b ) ) {\displaystyle (f(b),g(b))} . 
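For reference, the conclusion of Cauchy's mean value theorem stated above can be written out as

\bigl(f(b) - f(a)\bigr)\, g'(c) = \bigl(g(b) - g(a)\bigr)\, f'(c), \qquad \text{and, when } g(a) \neq g(b) \text{ and } g'(c) \neq 0, \qquad \frac{f'(c)}{g'(c)} = \frac{f(b) - f(a)}{g(b) - g(a)}.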
However, Cauchy's theorem does not claim the existence of such a tangent in all cases where ( f ( a ) , g ( a ) ) {\displaystyle (f(a),g(a))} and ( f ( b ) , g ( b ) ) {\displaystyle (f(b),g(b))} are distinct points, since it might be satisfied only for some value c {\displaystyle c} with f ′ ( c ) = g ′ ( c ) = 0 {\displaystyle f'(c)=g'(c)=0} , in other words a value for which the mentioned curve is stationary ; in such points no tangent to the curve is likely to be defined at all. An example of this situation is the curve given by which on the interval [ − 1 , 1 ] {\displaystyle [-1,1]} goes from the point ( − 1 , 0 ) {\displaystyle (-1,0)} to ( 1 , 0 ) {\displaystyle (1,0)} , yet never has a horizontal tangent; however it has a stationary point (in fact a cusp ) at t = 0 {\displaystyle t=0} . Cauchy's mean value theorem can be used to prove L'Hôpital's rule . The mean value theorem is the special case of Cauchy's mean value theorem when g ( t ) = t {\displaystyle g(t)=t} . The proof of Cauchy's mean value theorem is based on the same idea as the proof of the mean value theorem. Define h ( x ) = ( g ( b ) − g ( a ) ) f ( x ) − ( f ( b ) − f ( a ) ) g ( x ) {\displaystyle h(x)=(g(b)-g(a))f(x)-(f(b)-f(a))g(x)} , then we easily see h ( a ) = h ( b ) = f ( a ) g ( b ) − f ( b ) g ( a ) {\displaystyle h(a)=h(b)=f(a)g(b)-f(b)g(a)} . Since f {\displaystyle f} and g {\displaystyle g} are continuous on [ a , b ] {\displaystyle [a,b]} and differentiable on ( a , b ) {\displaystyle (a,b)} , the same is true for h {\displaystyle h} . All in all, h {\displaystyle h} satisfies the conditions of Rolle's theorem . Consequently, there is some c {\displaystyle c} in ( a , b ) {\displaystyle (a,b)} for which h ′ ( c ) = 0 {\displaystyle h'(c)=0} . Now using the definition of h {\displaystyle h} we have: The result easily follows. The mean value theorem generalizes to real functions of multiple variables. The trick is to use parametrization to create a real function of one variable, and then apply the one-variable theorem. Let G {\displaystyle G} be an open subset of R n {\displaystyle \mathbb {R} ^{n}} , and let f : G → R {\displaystyle f:G\to \mathbb {R} } be a differentiable function. Fix points x , y ∈ G {\displaystyle x,y\in G} such that the line segment between x , y {\displaystyle x,y} lies in G {\displaystyle G} , and define g ( t ) = f ( ( 1 − t ) x + t y ) {\displaystyle g(t)=f{\big (}(1-t)x+ty{\big )}} . Since g {\displaystyle g} is a differentiable function in one variable, the mean value theorem gives: for some c {\displaystyle c} between 0 and 1. But since g ( 1 ) = f ( y ) {\displaystyle g(1)=f(y)} and g ( 0 ) = f ( x ) {\displaystyle g(0)=f(x)} , computing g ′ ( c ) {\displaystyle g'(c)} explicitly we have: where ∇ {\displaystyle \nabla } denotes a gradient and ⋅ {\displaystyle \cdot } a dot product . This is an exact analog of the theorem in one variable (in the case n = 1 {\displaystyle n=1} this is the theorem in one variable). By the Cauchy–Schwarz inequality , the equation gives the estimate: In particular, when G {\displaystyle G} is convex and the partial derivatives of f {\displaystyle f} are bounded, f {\displaystyle f} is Lipschitz continuous (and therefore uniformly continuous ). As an application of the above, we prove that f {\displaystyle f} is constant if the open subset G {\displaystyle G} is connected and every partial derivative of f {\displaystyle f} is 0. 
Pick some point x 0 ∈ G {\displaystyle x_{0}\in G} , and let g ( x ) = f ( x ) − f ( x 0 ) {\displaystyle g(x)=f(x)-f(x_{0})} . We want to show g ( x ) = 0 {\displaystyle g(x)=0} for every x ∈ G {\displaystyle x\in G} . For that, let E = { x ∈ G : g ( x ) = 0 } {\displaystyle E=\{x\in G:g(x)=0\}} . Then E {\displaystyle E} is closed in G {\displaystyle G} and nonempty. It is open too: for every x ∈ E {\displaystyle x\in E} , for every y {\displaystyle y} in open ball centered at x {\displaystyle x} and contained in G {\displaystyle G} . Since G {\displaystyle G} is connected, we conclude E = G {\displaystyle E=G} . The above arguments are made in a coordinate-free manner; hence, they generalize to the case when G {\displaystyle G} is a subset of a Banach space. There is no exact analog of the mean value theorem for vector-valued functions (see below). However, there is an inequality which can be applied to many of the same situations to which the mean value theorem is applicable in the one dimensional case: [ 9 ] Theorem — For a continuous vector-valued function f : [ a , b ] → R k {\displaystyle \mathbf {f} :[a,b]\to \mathbb {R} ^{k}} differentiable on ( a , b ) {\displaystyle (a,b)} , there exists a number c ∈ ( a , b ) {\displaystyle c\in (a,b)} such that Take φ ( t ) = ( f ( b ) − f ( a ) ) ⋅ f ( t ) {\displaystyle \varphi (t)=({\textbf {f}}(b)-{\textbf {f}}(a))\cdot {\textbf {f}}(t)} . Then φ {\displaystyle \varphi } is real-valued and thus, by the mean value theorem, for some c ∈ ( a , b ) {\displaystyle c\in (a,b)} . Now, φ ( b ) − φ ( a ) = | f ( b ) − f ( a ) | 2 {\displaystyle \varphi (b)-\varphi (a)=|{\textbf {f}}(b)-{\textbf {f}}(a)|^{2}} and φ ′ ( c ) = ( f ( b ) − f ( a ) ) ⋅ f ′ ( c ) . {\displaystyle \varphi '(c)=({\textbf {f}}(b)-{\textbf {f}}(a))\cdot {\textbf {f}}'(c).} Hence, using the Cauchy–Schwarz inequality , from the above equation, we get: If f ( b ) = f ( a ) {\displaystyle {\textbf {f}}(b)={\textbf {f}}(a)} , the theorem holds trivially. Otherwise, dividing both sides by | f ( b ) − f ( a ) | {\displaystyle |{\textbf {f}}(b)-{\textbf {f}}(a)|} yields the theorem. Jean Dieudonné in his classic treatise Foundations of Modern Analysis discards the mean value theorem and replaces it by mean inequality as the proof is not constructive and one cannot find the mean value and in applications one only needs mean inequality. Serge Lang in Analysis I uses the mean value theorem, in integral form, as an instant reflex but this use requires the continuity of the derivative. If one uses the Henstock–Kurzweil integral one can have the mean value theorem in integral form without the additional assumption that derivative should be continuous as every derivative is Henstock–Kurzweil integrable. The reason why there is no analog of mean value equality is the following: If f : U → R m is a differentiable function (where U ⊂ R n is open) and if x + th , x , h ∈ R n , t ∈ [0, 1] is the line segment in question (lying inside U ), then one can apply the above parametrization procedure to each of the component functions f i ( i = 1, …, m ) of f (in the above notation set y = x + h ). In doing so one finds points x + t i h on the line segment satisfying But generally there will not be a single point x + t * h on the line segment satisfying for all i simultaneously . 
For example, define: Then f ( 2 π ) − f ( 0 ) = 0 ∈ R 2 {\displaystyle f(2\pi )-f(0)=\mathbf {0} \in \mathbb {R} ^{2}} , but f 1 ′ ( x ) = − sin ⁡ ( x ) {\displaystyle f_{1}'(x)=-\sin(x)} and f 2 ′ ( x ) = cos ⁡ ( x ) {\displaystyle f_{2}'(x)=\cos(x)} are never simultaneously zero as x {\displaystyle x} ranges over [ 0 , 2 π ] {\displaystyle \left[0,2\pi \right]} . The above theorem implies the following: Mean value inequality [ 10 ] — For a continuous function f : [ a , b ] → R k {\displaystyle {\textbf {f}}:[a,b]\to \mathbb {R} ^{k}} , if f {\displaystyle {\textbf {f}}} is differentiable on ( a , b ) {\displaystyle (a,b)} , then In fact, the above statement suffices for many applications and can be proved directly as follows. (We shall write f {\displaystyle f} for f {\displaystyle {\textbf {f}}} for readability.) First assume f {\displaystyle f} is differentiable at a {\displaystyle a} too. If f ′ {\displaystyle f'} is unbounded on ( a , b ) {\displaystyle (a,b)} , there is nothing to prove. Thus, assume sup ( a , b ) | f ′ | < ∞ {\displaystyle \sup _{(a,b)}|f'|<\infty } . Let M > sup ( a , b ) | f ′ | {\displaystyle M>\sup _{(a,b)}|f'|} be some real number. Let E = { 0 ≤ t ≤ 1 ∣ | f ( a + t ( b − a ) ) − f ( a ) | ≤ M t ( b − a ) } . {\displaystyle E=\{0\leq t\leq 1\mid |f(a+t(b-a))-f(a)|\leq Mt(b-a)\}.} We want to show 1 ∈ E {\displaystyle 1\in E} . By continuity of f {\displaystyle f} , the set E {\displaystyle E} is closed. It is also nonempty as 0 {\displaystyle 0} is in it. Hence, the set E {\displaystyle E} has the largest element s {\displaystyle s} . If s = 1 {\displaystyle s=1} , then 1 ∈ E {\displaystyle 1\in E} and we are done. Thus suppose otherwise. For 1 > t > s {\displaystyle 1>t>s} , Let ϵ > 0 {\displaystyle \epsilon >0} be such that M − ϵ > sup ( a , b ) | f ′ | {\displaystyle M-\epsilon >\sup _{(a,b)}|f'|} . By the differentiability of f {\displaystyle f} at a + s ( b − a ) {\displaystyle a+s(b-a)} (note s {\displaystyle s} may be 0), if t {\displaystyle t} is sufficiently close to s {\displaystyle s} , the first term is ≤ ϵ ( t − s ) ( b − a ) {\displaystyle \leq \epsilon (t-s)(b-a)} . The second term is ≤ ( M − ϵ ) ( t − s ) ( b − a ) {\displaystyle \leq (M-\epsilon )(t-s)(b-a)} . The third term is ≤ M s ( b − a ) {\displaystyle \leq Ms(b-a)} . Hence, summing the estimates up, we get: | f ( a + t ( b − a ) ) − f ( a ) | ≤ t M | b − a | {\displaystyle |f(a+t(b-a))-f(a)|\leq tM|b-a|} , a contradiction to the maximality of s {\displaystyle s} . Hence, 1 = s ∈ M {\displaystyle 1=s\in M} and that means: Since M {\displaystyle M} is arbitrary, this then implies the assertion. Finally, if f {\displaystyle f} is not differentiable at a {\displaystyle a} , let a ′ ∈ ( a , b ) {\displaystyle a'\in (a,b)} and apply the first case to f {\displaystyle f} restricted on [ a ′ , b ] {\displaystyle [a',b]} , giving us: since ( a ′ , b ) ⊂ ( a , b ) {\displaystyle (a',b)\subset (a,b)} . Letting a ′ → a {\displaystyle a'\to a} finishes the proof. All conditions for the mean value theorem are necessary: When one of the above conditions is not satisfied, the mean value theorem is not valid in general, and so it cannot be applied. The necessity of the first condition can be seen by the counterexample where the function f ( x ) = | x | {\displaystyle f(x)=|x|} on [-1,1] is not differentiable. 
The necessity of the second condition can be seen by the counterexample where the function f ( x ) = { 1 , at x = 0 0 , if x ∈ ( 0 , 1 ] {\displaystyle f(x)={\begin{cases}1,&{\text{at }}x=0\\0,&{\text{if }}x\in (0,1]\end{cases}}} satisfies criteria 1 since f ′ ( x ) = 0 {\displaystyle f'(x)=0} on ( 0 , 1 ) {\displaystyle (0,1)} but not criteria 2 since f ( 1 ) − f ( 0 ) 1 − 0 = − 1 {\displaystyle {\frac {f(1)-f(0)}{1-0}}=-1} and − 1 ≠ 0 = f ′ ( x ) {\displaystyle -1\neq 0=f'(x)} for all x ∈ ( 0 , 1 ) {\displaystyle x\in (0,1)} so no such c {\displaystyle c} exists. The theorem is false if a differentiable function is complex-valued instead of real-valued. For example, if f ( x ) = e x i {\displaystyle f(x)=e^{xi}} for all real x {\displaystyle x} , then f ( 2 π ) − f ( 0 ) = 0 = 0 ( 2 π − 0 ) {\displaystyle f(2\pi )-f(0)=0=0(2\pi -0)} while f ′ ( x ) ≠ 0 {\displaystyle f'(x)\neq 0} for any real x {\displaystyle x} . Let f : [ a , b ] → R be a continuous function. Then there exists c in ( a , b ) such that This follows at once from the fundamental theorem of calculus , together with the mean value theorem for derivatives. Since the mean value of f on [ a , b ] is defined as we can interpret the conclusion as f achieves its mean value at some c in ( a , b ). [ 12 ] In general, if f : [ a , b ] → R is continuous and g is an integrable function that does not change sign on [ a , b ], then there exists c in ( a , b ) such that There are various slightly different theorems called the second mean value theorem for definite integrals . A commonly found version is as follows: Here G ( a + ) {\displaystyle G(a^{+})} stands for lim x → a + G ( x ) {\textstyle {\lim _{x\to a^{+}}G(x)}} , the existence of which follows from the conditions. Note that it is essential that the interval ( a , b ] contains b . A variant not having this requirement is: [ 13 ] If the function G {\displaystyle G} returns a multi-dimensional vector, then the MVT for integration is not true, even if the domain of G {\displaystyle G} is also multi-dimensional. For example, consider the following 2-dimensional function defined on an n {\displaystyle n} -dimensional cube: Then, by symmetry it is easy to see that the mean value of G {\displaystyle G} over its domain is (0,0): However, there is no point in which G = ( 0 , 0 ) {\displaystyle G=(0,0)} , because | G | = 1 {\displaystyle |G|=1} everywhere. Assume that f , g , {\displaystyle f,g,} and h {\displaystyle h} are differentiable functions on ( a , b ) {\displaystyle (a,b)} that are continuous on [ a , b ] {\displaystyle [a,b]} . Define There exists c ∈ ( a , b ) {\displaystyle c\in (a,b)} such that D ′ ( c ) = 0 {\displaystyle D'(c)=0} . Notice that and if we place h ( x ) = 1 {\displaystyle h(x)=1} , we get Cauchy's mean value theorem . If we place h ( x ) = 1 {\displaystyle h(x)=1} and g ( x ) = x {\displaystyle g(x)=x} we get Lagrange's mean value theorem . The proof of the generalization is quite simple: each of D ( a ) {\displaystyle D(a)} and D ( b ) {\displaystyle D(b)} are determinants with two identical rows, hence D ( a ) = D ( b ) = 0 {\displaystyle D(a)=D(b)=0} . The Rolle's theorem implies that there exists c ∈ ( a , b ) {\displaystyle c\in (a,b)} such that D ′ ( c ) = 0 {\displaystyle D'(c)=0} . Let X and Y be non-negative random variables such that E[ X ] < E[ Y ] < ∞ and X ≤ s t Y {\displaystyle X\leq _{st}Y} (i.e. X is smaller than Y in the usual stochastic order ). 
Then there exists an absolutely continuous non-negative random variable Z having probability density function Let g be a measurable and differentiable function such that E[ g ( X )], E[ g ( Y )] < ∞, and let its derivative g′ be measurable and Riemann-integrable on the interval [ x , y ] for all y ≥ x ≥ 0. Then, E[ g′ ( Z )] is finite and [ 14 ] As noted above, the theorem does not hold for differentiable complex-valued functions. Instead, a generalization of the theorem is stated such: [ 15 ] Let f : Ω → C be a holomorphic function on the open convex set Ω, and let a and b be distinct points in Ω. Then there exist points u , v on the interior of the line segment from a to b such that Where Re() is the real part and Im() is the imaginary part of a complex-valued function.
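One common way of writing the displayed conclusion for the holomorphic case, stated componentwise (this is a paraphrase of the cited result rather than a verbatim quotation), is

\operatorname{Re}\bigl(f'(u)\bigr) = \operatorname{Re}\!\left(\frac{f(b) - f(a)}{b - a}\right), \qquad \operatorname{Im}\bigl(f'(v)\bigr) = \operatorname{Im}\!\left(\frac{f(b) - f(a)}{b - a}\right).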
https://en.wikipedia.org/wiki/Mean_value_theorem
In mathematics , a meander or closed meander is a self-avoiding closed curve which crosses a given line a number of times, meaning that it intersects the line while passing from one side to the other. Intuitively, a meander can be viewed as a meandering river with a straight road crossing the river over a number of bridges. The points where the line and the curve cross are therefore referred to as "bridges". Given a fixed line L in the Euclidean plane , a meander of order n is a self-avoiding closed curve in the plane that crosses the line at 2 n points. Two meanders are equivalent if one meander can be continuously deformed into the other while maintaining its property of being a meander and leaving the order of the bridges on the road, in the order in which they are crossed, invariant. The single meander of order 1 intersects the line twice: This meander intersects the line four times and thus has order 2: There are two meanders of order 2. Flipping the image vertically produces the other. There are three non-equivalent meanders of order 3, each intersecting the line six times. Here are two of them: The number of distinct meanders of order n is the meandric number M n . The first fifteen meandric numbers are given below (sequence A005315 in the OEIS ). A meandric permutation of order n is defined on the set {1, 2, ..., 2 n } and is determined as follows: In the diagram on the right, the order 4 meandric permutation is given by (1 8 5 4 3 6 7 2). This is a permutation written in cyclic notation and not to be confused with one-line notation . If π is a meandric permutation, then π 2 consists of two cycles , one containing all the even symbols and the other all the odd symbols. Permutations with this property are called alternate permutations , since the symbols in the original permutation alternate between odd and even integers. However, not all alternate permutations are meandric because it may not be possible to draw them without introducing a self-intersection in the curve. For example, the order 3 alternate permutation, (1 4 3 6 5 2), is not meandric. Given a fixed line L in the Euclidean plane, an open meander of order n is a non-self-intersecting curve in the plane that crosses the line at n points. Two open meanders are equivalent if one can be continuously deformed into the other while maintaining its property of being an open meander and leaving the order of the bridges on the road, in the order in which they are crossed, invariant. The open meander of order 1 intersects the line once: The open meander of order 2 intersects the line twice: The number of distinct open meanders of order n is the open meandric number m n . The first fifteen open meandric numbers are given below (sequence A005316 in the OEIS ). Given a fixed oriented ray R (a closed half line) in the Euclidean plane, a semi-meander of order n is a non-self-intersecting closed curve in the plane that crosses the ray at n points. Two semi-meanders are equivalent if one can be continuously deformed into the other while maintaining its property of being a semi-meander and leaving the order of the bridges on the ray, in the order in which they are crossed, invariant. The semi-meander of order 1 intersects the ray once: The semi-meander of order 2 intersects the ray twice: The number of distinct semi-meanders of order n is the semi-meandric number M n (usually denoted with an overline instead of an underline). The first fifteen semi-meandric numbers are given below (sequence A000682 in the OEIS ). 
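The property of meandric permutations described above, namely that the square splits into one cycle on the odd symbols and one on the even symbols, is easy to check mechanically. The short Python sketch below does so for the order-4 example (1 8 5 4 3 6 7 2); the helper names are of course not standard.

# The order-4 meandric permutation (1 8 5 4 3 6 7 2), given in cyclic notation.
cycle = [1, 8, 5, 4, 3, 6, 7, 2]
perm = {cycle[i]: cycle[(i + 1) % len(cycle)] for i in range(len(cycle))}
square = {x: perm[perm[x]] for x in perm}   # the permutation pi squared

def cycles(mapping):
    """Decompose a permutation (given as a dict) into its cycles."""
    remaining, out = set(mapping), []
    while remaining:
        x = min(remaining)
        cyc = []
        while x in remaining:
            remaining.remove(x)
            cyc.append(x)
            x = mapping[x]
        out.append(cyc)
    return out

print(cycles(square))   # [[1, 5, 3, 7], [2, 8, 4, 6]]: odd and even symbols separate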
There is an injective function from meandric to open meandric numbers: Each meandric number can be bounded by semi-meandric numbers: For n > 1, meandric numbers are even :
https://en.wikipedia.org/wiki/Meander_(mathematics)
Non-linguistic (or pre-linguistic ) meaning is a type of meaning not mediated or perceived through linguistic signs . In linguistics , the concept is used in discussions about whether such meaning is different from meaning expressed through language (i.e. semantics ), whether it should play a role in linguistic theory , or to what extent thought and conceptualization are affected by linguistic knowledge (as in the language of thought hypothesis or linguistic relativity ). The sense that sentient creatures have that various objects of our universe are linked is commonly referred to as a person's sense of "meaning". This is the sense of meaning at work when asking a person, as they leave a theater, "What did that movie mean to you?" In short, the word "meaning" can sometimes be used to describe the interpretations that people have of the world. Example: "Chunks are pieces of information linked and bound by meaning. (Remembering details vs. getting an overall meaning) links individual memory traces together, to create conceptual chunks." [ 1 ] Basic or non-idealized meaning as a type of semantics is a branch of psychology and ethics, and reflects the original use of the term "meaning" as understood early in the 20th century by Lady Welby after her daughter had translated the term "semantics" from French. On the other hand, meaning, in so far as it was later objectified by not considering particular situations and the real intentions of speakers and writers, examines the ways in which words, phrases, and sentences can seem to have meaning. Objectified semantics is contrasted with communication-focused semantics, where understanding the intent and assumptions of particular speakers and writers is primary, as in the idea that people mean, and not words, sentences or propositions. An underlying difference is that where causes are identified with relations or laws, it is normal to objectify meaning and consider it a branch of linguistics, while if causes are identified with particular agents, objects, or forces (as if to cause means to influence, as most historians and practical people assume), then real or non-objectified meaning is primary and we are dealing with intent or purpose as an aspect of human psychology, especially since human intent can be and often is independent of language and linguistics. Connotation , such as good or bad reputation, in contrast to denotation , can be considered a kind of non-linguistic meaning. The word "meaning" can be used to describe the internal workings of the mind, independently of any linguistic activity. This sort of meaning is deeply psychological. If we look for other uses we can find intent, feeling, implication, importance, value, and signification. Since the negative form "meaningless" challenges and would deny these uses, experts believe that underlying them all are understanding and understandability. One approach to this way of understanding meaning was that of the psychosocial theorist Erik Erikson , who had a certain perspective on the role of meaning in the process of human bodily development and socialization. Within his model, a "meaning" is the external source of gratification associated with the human erogenous zones and their respective modes. See imprinting (psychology) for some related topics. Some communication by body language arises out of bodily signals that follow directly from human instinct. Blushing , tears , erections and the startle reaction are examples. 
This type of communication is usually unintentional, but nevertheless conveys certain information to anyone present. Non-linguistic meaning may be identified as pragmatics , and include beliefs, implicatures , social factors and other features of the context. [ 2 ] : 199 Paul Grice distinguished natural (i.e. non-linguistic) from non-natural meaning, as the latter is intention-based. [ 3 ] : 214 This perspective is related to the pragmatists , who insist that the meaning of an expression is its consequences. A proponent of this view was Charles Sanders Peirce , who wrote the following: The whole function of thought is to produce habits of action... To develop its meaning, we have, therefore, simply to determine what habits it produces, for what a thing means is simply what habits it involves. Now, the identity of a habit depends on how it might lead us to act, not merely under such circumstances as are likely to arise, but under such as might possibly occur, no matter how improbable they may be. ...I only desire to point out how impossible it is that we should have an idea in our minds which relates to anything but conceived sensible effects of things. Our idea of anything is our idea of its sensible effects; and if we fancy that we have any other we deceive ourselves. Outside of the Pragmatic tradition was Canadian 20th century philosopher of media Marshall McLuhan . His famous dictum, "the medium is the message", can be understood to be a consequentialist theory of meaning . His idea was that the medium which is used to communicate carries with it information: namely, the consequences that arise from the fact that the medium has become popular. For example, one "meaning" of the light bulb might be the idea of being able to read during the night. Some non-linguistic meaning emerges from natural history as a development over vast periods of time. This is the theory behind autopoiesis and self-organization . Some social scientists use autopoiesis as a model for the development of structural coupling in the family. A typical example of this kind of relationship is the predator-prey relationship. These relations carry strong intrinsic (life and death) meaning for all living organisms, including people. Observations of child development and of behavioral abnormalities in some people indicate that some innate capabilities of human beings are essential to the process of meaning creation. Two examples are: Ideasthesia refers to the capability of our minds to experience meaning. When concepts are activated i.e., when the meaning is extracted, the phenomenal experiences are affected. This tight relationship between meaning and experiences is investigated by research on ideasthesia.
https://en.wikipedia.org/wiki/Meaning_(non-linguistic)
This is a list of minor planets which have been officially named by the Working Group for Small Bodies Nomenclature (WGSBN) of the International Astronomical Union (IAU). The list consists of partial pages, each covering a number range of 1,000 bodies and citing the source of each minor planet's name. An overview of all existing partial pages is given in section § Index . Among the hundreds of thousands of numbered minor planets, only a small fraction have received a name so far. As of 10 June 2024, there are 24,795 named minor planets out of a total of more than 600,000 numbered ones (also see List of minor planets § Main index , as numbers increase constantly). [ 1 ] Most of these bodies are named for people , in particular astronomers, as well as figures from mythology and fiction. Many minor planets are also named after places such as cities, towns, and villages, mountains and volcanoes; after rivers and observatories; and after organizations, clubs and astronomical societies. Some are named after animals and plants. A few minor planets are named after exotic entities such as supercomputers, or have an unknown origin. The first few thousand minor planets have all been named, with the near-Earth asteroid (4596) 1981 QB currently being the lowest-numbered unnamed minor planet. [ 2 ] The first 3 pages in the table below contain 1,000 named entries each. The first 13 and 33 pages contain at least 500 and 100 named entries each, respectively. The first range to contain no entries is 307001–308000 . There are also several name conflicts with other astronomical objects , mostly with planetary satellites and among themselves. Following a proposal of the discovering astronomer , new minor planet names are approved and published by IAU's WGSBN several times a year. [ 1 ] The WGSBN applies a set of rules for naming minor planets. [ 3 ] These range from syntax restrictions to non-offensive meanings. Over the years the rules have changed several times. In the beginning, for example, most minor planets were named after female characters from Greek and Roman mythology. [ 3 ] This is an overview of all existing partial lists on the meanings of minor planets ( MoMP ). Each table covers 100,000 minor planets, with each cell representing a specific partial list of 1,000 sequentially numbered bodies. Grayed-out cells do not yet contain any citations for the corresponding number range. For an introduction, see § top .
https://en.wikipedia.org/wiki/Meanings_of_minor-planet_names
In mathematics , a measure-preserving dynamical system is an object of study in the abstract formulation of dynamical systems , and ergodic theory in particular. Measure-preserving systems obey the Poincaré recurrence theorem , and are a special case of conservative systems . They provide the formal, mathematical basis for a broad range of physical systems, and, in particular, many systems from classical mechanics (in particular, most non-dissipative systems) as well as systems in thermodynamic equilibrium . A measure-preserving dynamical system is defined as a probability space and a measure-preserving transformation on it. In more detail, it is a system with the following structure: One may ask why the measure preserving transformation is defined in terms of the inverse μ ( T − 1 ( A ) ) = μ ( A ) {\displaystyle \mu (T^{-1}(A))=\mu (A)} instead of the forward transformation μ ( T ( A ) ) = μ ( A ) {\displaystyle \mu (T(A))=\mu (A)} . This can be understood intuitively. Consider the typical measure on the unit interval [ 0 , 1 ] {\displaystyle [0,1]} , and a map T x = 2 x mod 1 = { 2 x if x < 1 / 2 2 x − 1 if x > 1 / 2 {\displaystyle Tx=2x\mod 1={\begin{cases}2x{\text{ if }}x<1/2\\2x-1{\text{ if }}x>1/2\\\end{cases}}} . This is the Bernoulli map . Now, distribute an even layer of paint on the unit interval [ 0 , 1 ] {\displaystyle [0,1]} , and then map the paint forward. The paint on the [ 0 , 1 / 2 ] {\displaystyle [0,1/2]} half is spread thinly over all of [ 0 , 1 ] {\displaystyle [0,1]} , and the paint on the [ 1 / 2 , 1 ] {\displaystyle [1/2,1]} half as well. The two layers of thin paint, layered together, recreates the exact same paint thickness. More generally, the paint that would arrive at subset A ⊂ [ 0 , 1 ] {\displaystyle A\subset [0,1]} comes from the subset T − 1 ( A ) {\displaystyle T^{-1}(A)} . For the paint thickness to remain unchanged (measure-preserving), the mass of incoming paint should be the same: μ ( A ) = μ ( T − 1 ( A ) ) {\displaystyle \mu (A)=\mu (T^{-1}(A))} . Consider a mapping T {\displaystyle {\mathcal {T}}} of power sets : Consider now the special case of maps T {\displaystyle {\mathcal {T}}} which preserve intersections, unions and complements (so that it is a map of Borel sets ) and also sends X {\displaystyle X} to X {\displaystyle X} (because we want it to be conservative ). Every such conservative, Borel-preserving map can be specified by some surjective map T : X → X {\displaystyle T:X\to X} by writing T ( A ) = T − 1 ( A ) {\displaystyle {\mathcal {T}}(A)=T^{-1}(A)} . Of course, one could also define T ( A ) = T ( A ) {\displaystyle {\mathcal {T}}(A)=T(A)} , but this is not enough to specify all such possible maps T {\displaystyle {\mathcal {T}}} . That is, conservative, Borel-preserving maps T {\displaystyle {\mathcal {T}}} cannot, in general, be written in the form T ( A ) = T ( A ) ; {\displaystyle {\mathcal {T}}(A)=T(A);} . μ ( T − 1 ( A ) ) {\displaystyle \mu (T^{-1}(A))} has the form of a pushforward , whereas μ ( T ( A ) ) {\displaystyle \mu (T(A))} is generically called a pullback . Almost all properties and behaviors of dynamical systems are defined in terms of the pushforward. 
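The paint-spreading intuition for the Bernoulli map can be checked numerically: the Lebesgue measure of the preimage T⁻¹(A) of an interval A equals the measure of A itself. The Monte Carlo sketch below estimates the measure of T⁻¹(A) as the fraction of uniformly drawn points whose image lands in A; the test interval and sample size are arbitrary choices for illustration.

import random

T = lambda x: (2 * x) % 1.0          # the Bernoulli (doubling) map on [0, 1]
A = (0.2, 0.7)                       # a test interval of length 0.5

samples = [random.random() for _ in range(100_000)]
measure_of_preimage = sum(A[0] <= T(x) < A[1] for x in samples) / len(samples)
print(measure_of_preimage, A[1] - A[0])   # both close to 0.5: the measure is preserved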
For example, the transfer operator is defined in terms of the pushforward of the transformation map T {\displaystyle T} ; the measure μ {\displaystyle \mu } can now be understood as an invariant measure ; it is just the Frobenius–Perron eigenvector of the transfer operator (recall, the FP eigenvector is the largest eigenvector of a matrix; in this case it is the eigenvector which has the eigenvalue one: the invariant measure.) There are two classification problems of interest. One, discussed below, fixes ( X , B , μ ) {\displaystyle (X,{\mathcal {B}},\mu )} and asks about the isomorphism classes of a transformation map T {\displaystyle T} . The other, discussed in transfer operator , fixes ( X , B ) {\displaystyle (X,{\mathcal {B}})} and T {\displaystyle T} , and asks about maps μ {\displaystyle \mu } that are measure-like. Measure-like, in that they preserve the Borel properties, but are no longer invariant; they are in general dissipative and so give insights into dissipative systems and the route to equilibrium. In terms of physics, the measure-preserving dynamical system ( X , B , μ , T ) {\displaystyle (X,{\mathcal {B}},\mu ,T)} often describes a physical system that is in equilibrium, for example, thermodynamic equilibrium . One might ask: how did it get that way? Often, the answer is by stirring, mixing , turbulence , thermalization or other such processes. If a transformation map T {\displaystyle T} describes this stirring, mixing, etc. then the system ( X , B , μ , T ) {\displaystyle (X,{\mathcal {B}},\mu ,T)} is all that is left, after all of the transient modes have decayed away. The transient modes are precisely those eigenvectors of the transfer operator that have eigenvalue less than one; the invariant measure μ {\displaystyle \mu } is the one mode that does not decay away. The rate of decay of the transient modes are given by (the logarithm of) their eigenvalues; the eigenvalue one corresponds to infinite half-life. The microcanonical ensemble from physics provides an informal example. Consider, for example, a fluid, gas or plasma in a box of width, length and height w × l × h , {\displaystyle w\times l\times h,} consisting of N {\displaystyle N} atoms. A single atom in that box might be anywhere, having arbitrary velocity; it would be represented by a single point in w × l × h × R 3 . {\displaystyle w\times l\times h\times \mathbb {R} ^{3}.} A given collection of N {\displaystyle N} atoms would then be a single point somewhere in the space ( w × l × h ) N × R 3 N . {\displaystyle (w\times l\times h)^{N}\times \mathbb {R} ^{3N}.} The "ensemble" is the collection of all such points, that is, the collection of all such possible boxes (of which there are an uncountably-infinite number). This ensemble of all-possible-boxes is the space X {\displaystyle X} above. In the case of an ideal gas , the measure μ {\displaystyle \mu } is given by the Maxwell–Boltzmann distribution . It is a product measure , in that if p i ( x , y , z , v x , v y , v z ) d 3 x d 3 p {\displaystyle p_{i}(x,y,z,v_{x},v_{y},v_{z})\,d^{3}x\,d^{3}p} is the probability of atom i {\displaystyle i} having position and velocity x , y , z , v x , v y , v z {\displaystyle x,y,z,v_{x},v_{y},v_{z}} , then, for N {\displaystyle N} atoms, the probability is the product of N {\displaystyle N} of these. This measure is understood to apply to the ensemble. So, for example, one of the possible boxes in the ensemble has all of the atoms on one side of the box. 
One can compute the likelihood of this, in the Maxwell–Boltzmann measure. It will be enormously tiny, of order O ( 2 − 3 N ) . {\displaystyle {\mathcal {O}}\left(2^{-3N}\right).} Of all possible boxes in the ensemble, this is a ridiculously small fraction. The only reason that this is an "informal example" is because writing down the transition function T {\displaystyle T} is difficult, and, even if written down, it is hard to perform practical computations with it. Difficulties are compounded if there are interactions between the particles themselves, like a van der Waals interaction or some other interaction suitable for a liquid or a plasma; in such cases, the invariant measure is no longer the Maxwell–Boltzmann distribution. The art of physics is finding reasonable approximations. This system does exhibit one key idea from the classification of measure-preserving dynamical systems: two ensembles, having different temperatures, are inequivalent. The entropy for a given canonical ensemble depends on its temperature; as physical systems, it is "obvious" that when the temperatures differ, so do the systems. This holds in general: systems with different entropy are not isomorphic. Unlike the informal example above, the examples below are sufficiently well-defined and tractable that explicit, formal computations can be performed. The definition of a measure-preserving dynamical system can be generalized to the case in which T is not a single transformation that is iterated to give the dynamics of the system, but instead is a monoid (or even a group , in which case we have the action of a group upon the given probability space) of transformations T s : X → X parametrized by s ∈ Z (or R , or N ∪ {0}, or [0, +∞)), where each transformation T s satisfies the same requirements as T above. [ 1 ] In particular, the transformations obey the rules: The earlier, simpler case fits into this framework by defining T s = T s for s ∈ N . The concept of a homomorphism and an isomorphism may be defined. Consider two dynamical systems ( X , A , μ , T ) {\displaystyle (X,{\mathcal {A}},\mu ,T)} and ( Y , B , ν , S ) {\displaystyle (Y,{\mathcal {B}},\nu ,S)} . Then a mapping is a homomorphism of dynamical systems if it satisfies the following three properties: The system ( Y , B , ν , S ) {\displaystyle (Y,{\mathcal {B}},\nu ,S)} is then called a factor of ( X , A , μ , T ) {\displaystyle (X,{\mathcal {A}},\mu ,T)} . The map φ {\displaystyle \varphi \;} is an isomorphism of dynamical systems if, in addition, there exists another mapping that is also a homomorphism, which satisfies Hence, one may form a category of dynamical systems and their homomorphisms. A point x ∈ X is called a generic point if the orbit of the point is distributed uniformly according to the measure. Consider a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} , and let Q = { Q 1 , ..., Q k } be a partition of X into k measurable pair-wise disjoint sets. Given a point x ∈ X , clearly x belongs to only one of the Q i . Similarly, the iterated point T n x can belong to only one of the parts as well. The symbolic name of x , with regards to the partition Q , is the sequence of integers { a n } such that The set of symbolic names with respect to a partition is called the symbolic dynamics of the dynamical system. A partition Q is called a generator or generating partition if μ-almost every point x has a unique symbolic name. 
Given a partition Q = { Q 1 , ..., Q k } and a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} , define the T -pullback of Q as Further, given two partitions Q = { Q 1 , ..., Q k } and R = { R 1 , ..., R m }, define their refinement as With these two constructs, the refinement of an iterated pullback is defined as which plays crucial role in the construction of the measure-theoretic entropy of a dynamical system. The entropy of a partition Q {\displaystyle {\mathcal {Q}}} is defined as [ 2 ] [ 3 ] The measure-theoretic entropy of a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} with respect to a partition Q = { Q 1 , ..., Q k } is then defined as Finally, the Kolmogorov–Sinai metric or measure-theoretic entropy of a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} is defined as where the supremum is taken over all finite measurable partitions. A theorem of Yakov Sinai in 1959 shows that the supremum is actually obtained on partitions that are generators. Thus, for example, the entropy of the Bernoulli process is log 2, since almost every real number has a unique binary expansion . That is, one may partition the unit interval into the intervals [0, 1/2) and [1/2, 1]. Every real number x is either less than 1/2 or not; and likewise so is the fractional part of 2 n x . If the space X is compact and endowed with a topology, or is a metric space, then the topological entropy may also be defined. If T {\displaystyle T} is an ergodic, piecewise expanding, and Markov on X ⊂ R {\displaystyle X\subset \mathbb {R} } , and μ {\displaystyle \mu } is absolutely continuous with respect to the Lebesgue measure, then we have the Rokhlin formula [ 4 ] (section 4.3 and section 12.3 [ 5 ] ): h μ ( T ) = ∫ ln ⁡ | d T / d x | μ ( d x ) {\displaystyle h_{\mu }(T)=\int \ln |dT/dx|\mu (dx)} This allows calculation of entropy of many interval maps, such as the logistic map . Ergodic means that T − 1 ( A ) = A {\displaystyle T^{-1}(A)=A} implies A {\displaystyle A} has full measure or zero measure. Piecewise expanding and Markov means that there is a partition of X {\displaystyle X} into finitely many open intervals, such that for some ϵ > 0 {\displaystyle \epsilon >0} , | T ′ | ≥ 1 + ϵ {\displaystyle |T'|\geq 1+\epsilon } on each open interval. Markov means that for each I i {\displaystyle I_{i}} from those open intervals, either T ( I i ) ∩ I i = ∅ {\displaystyle T(I_{i})\cap I_{i}=\emptyset } or T ( I i ) ∩ I i = I i {\displaystyle T(I_{i})\cap I_{i}=I_{i}} . One of the primary activities in the study of measure-preserving systems is their classification according to their properties. That is, let ( X , B , μ ) {\displaystyle (X,{\mathcal {B}},\mu )} be a measure space, and let U {\displaystyle U} be the set of all measure preserving systems ( X , B , μ , T ) {\displaystyle (X,{\mathcal {B}},\mu ,T)} . An isomorphism S ∼ T {\displaystyle S\sim T} of two transformations S , T {\displaystyle S,T} defines an equivalence relation R ⊂ U × U . {\displaystyle {\mathcal {R}}\subset U\times U.} The goal is then to describe the relation R {\displaystyle {\mathcal {R}}} . A number of classification theorems have been obtained; but quite interestingly, a number of anti-classification theorems have been found as well. The anti-classification theorems state that there are more than a countable number of isomorphism classes, and that a countable amount of information is not sufficient to classify isomorphisms. 
[ 6 ] [ 7 ] The first anti-classification theorem, due to Hjorth, states that if U {\displaystyle U} is endowed with the weak topology , then the set R {\displaystyle {\mathcal {R}}} is not a Borel set . [ 8 ] There are a variety of other anti-classification results. For example, replacing isomorphism with Kakutani equivalence , it can be shown that there are uncountably many non-Kakutani equivalent ergodic measure-preserving transformations of each entropy type. [ 9 ] These stand in contrast to the classification theorems. These include: Krieger finite generator theorem [ 14 ] (Krieger 1970) — Given a dynamical system on a Lebesgue space of measure 1, where T {\textstyle T} is invertible, measure preserving, and ergodic. If h T ≤ ln ⁡ k {\displaystyle h_{T}\leq \ln k} for some integer k {\displaystyle k} , then the system has a size- k {\displaystyle k} generator. If the entropy is exactly equal to ln ⁡ k {\displaystyle \ln k} , then such a generator exists iff the system is isomorphic to the Bernoulli shift on k {\displaystyle k} symbols with equal measures.
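The Rokhlin formula quoted above lends itself to a rough numerical check. A minimal sketch in Python, assuming the full logistic map T(x) = 4x(1−x), which the article cites as an example; its measure-theoretic entropy is ln 2, and by the Birkhoff ergodic theorem the time average of ln|T′(x)| along a typical orbit approximates the integral ∫ ln|T′| dμ (the orbit length and seed below are arbitrary choices):

```python
import math
import random

# Numerical check of the Rokhlin formula  h_mu(T) = ∫ ln|T'(x)| dmu(x)
# for the full logistic map T(x) = 4x(1-x), whose entropy is ln 2.
# The time average of ln|T'| along a typical orbit approximates the
# space average with respect to the invariant measure mu.

def logistic(x):
    return 4.0 * x * (1.0 - x)

def entropy_estimate(n_iter=200_000, seed=1):
    x = random.Random(seed).random()
    for _ in range(1_000):                 # discard a transient
        x = logistic(x)
    total, count = 0.0, 0
    for _ in range(n_iter):
        derivative = abs(4.0 - 8.0 * x)    # |T'(x)|, with T'(x) = 4 - 8x
        if derivative > 0.0:               # skip the measure-zero point x = 1/2
            total += math.log(derivative)
            count += 1
        x = logistic(x)
    return total / count

if __name__ == "__main__":
    print(entropy_estimate(), "vs ln 2 =", math.log(2))
```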
https://en.wikipedia.org/wiki/Measure-preserving_dynamical_system
The measure problem in cosmology concerns how to compute the ratios of universes of different types within a multiverse . It typically arises in the context of eternal inflation . The problem arises because different approaches to calculating these ratios yield different results, and it is not clear which approach (if any) is correct. [ 1 ] Measures can be evaluated by whether they predict observed physical constants, as well as whether they avoid counterintuitive implications, such as the youngness paradox or Boltzmann brains . [ 2 ] While dozens of measures have been proposed, [ 3 ] : 2 few physicists consider the problem to be solved. [ 4 ] Infinite multiverse theories are becoming increasingly popular, but because they involve infinitely many instances of different types of universes, it is unclear how to compute the fractions of each type of universe. [ 4 ] Alan Guth put it this way: [ 4 ] Sean M. Carroll offered another informal example: [ 1 ] Different procedures for computing the limit of this fraction yield wildly different answers. [ 1 ] One way to illustrate how different regularization methods produce different answers is to calculate the limit of the fraction of sets of positive integers that are even . Suppose the integers are ordered the usual way, At a cutoff of "the first five elements of the list", the fraction is 2/5; at a cutoff of "the first six elements" the fraction is 1/2; the limit of the fraction, as the subset grows, converges to 1/2. However, if the integers are ordered such that any odd number is followed by two consecutive even numbers, the limit of the fraction of integers that are even converges to 2/3 rather than 1/2. [ 5 ] A popular way to decide what ordering to use in regularization is to pick the simplest or most natural-seeming method of ordering. Everyone agrees that the first sequence, ordered by increasing size of the integers, seems more natural. Similarly, many physicists agree that the "proper-time cutoff measure" (below) seems the simplest and most natural method of regularization. Unfortunately, the proper-time cutoff measure seems to produce incorrect results. [ 3 ] : 2 [ 5 ] The measure problem is important in cosmology because in order to compare cosmological theories in an infinite multiverse, we need to know which types of universes they predict to be more common than others. [ 4 ] The proper-time cutoff measure considers the probability P ( ϕ , t ) {\displaystyle P(\phi ,t)} of finding a given scalar field ϕ {\displaystyle \phi } at a given proper time t {\displaystyle t} . [ 3 ] : 1–2 During inflation , the region around a point grows like e 3 H Δ t {\displaystyle e^{3H\Delta t}} in a small proper-time interval Δ t {\displaystyle \Delta t} , [ 3 ] : 1 where H {\displaystyle H} is the Hubble parameter . This measure has the advantage of being stationary in the sense that probabilities remain the same over time in the limit of large t {\displaystyle t} . [ 3 ] : 1 However, it suffers from the youngness paradox , which has the effect of making it exponentially more probable that we would be in regions of high temperature, in conflict with what we observe; this is because regions that exited inflation later than our region, spent more time than us experiencing runaway inflationary exponential growth. [ 3 ] : 2 For example, observers in a Universe of 13.8 billion years old (our observed age) are outnumbered by observers in a 13.0 billion year old Universe by a factor of 10 10 60 {\displaystyle 10^{10^{60}}} . 
This lopsidedness continues, until the most numerous observers resembling us are "Boltzmann babies" formed by improbable fluctuations in the hot, very early, Universe. Therefore, physicists reject the simple proper-time cutoff as a failed hypothesis. [ 6 ] Time can be parameterized in different ways than proper time. [ 3 ] : 1 One choice is to parameterize by the scale factor of space a {\displaystyle a} , or more commonly by η ∼ log ⁡ a {\displaystyle \eta \sim \log a} . [ 3 ] : 1 Then a given region of space expands as e 3 Δ η {\displaystyle e^{3\Delta \eta }} , independent of H {\displaystyle H} . [ 3 ] : 1 This approach can be generalized to a family of measures in which a small region grows as e 3 H β Δ t β {\displaystyle e^{3H^{\beta }\Delta t_{\beta }}} for some β {\displaystyle \beta } and time-slicing approach t β {\displaystyle t_{\beta }} . [ 3 ] : 1–2 Any choice for β {\displaystyle \beta } remains stationary for large times. The scale-factor cutoff measure takes β = 0 {\displaystyle \beta =0} , which avoids the youngness paradox by not giving greater weight to regions that retain high energy density for long periods. [ 3 ] : 2 This measure is very sensitive to the choice of β {\displaystyle \beta } because any β > 0 {\displaystyle \beta >0} yields the youngness paradox, while any β < 0 {\displaystyle \beta <0} yields an "oldness paradox" in which most life is predicted to exist in cold, empty space as Boltzmann brains rather than as the evolved creatures with orderly experiences that we seem to be. [ 3 ] : 2 De Simone et al. (2010) consider the scale-factor cutoff measure to be a promising solution to the measure problem. [ 7 ] This measure has also been shown to produce good agreement with observational values of the cosmological constant . [ 8 ] The stationary measure proceeds from the observation that different processes achieve stationarity of P ( ϕ , t ) {\displaystyle P(\phi ,t)} at different times. [ 3 ] : 2 Thus, rather than comparing processes at a given time since the beginning, the stationary measure compares them in terms of time since each process individually become stationary. [ 3 ] : 2 For instance, different regions of the universe can be compared based on time since star formation began. [ 3 ] : 3 Andrei Linde and coauthors have suggested that the stationary measure avoids both the youngness paradox and Boltzmann brains. [ 2 ] However, the stationary measure predicts extreme (either very large or very small) values of the primordial density contrast Q {\displaystyle Q} and the gravitational constant G {\displaystyle G} , inconsistent with observations. [ 7 ] : 2 Reheating marks the end of inflation. The causal diamond is the finite four-volume formed by intersecting the future light cone of an observer crossing the reheating hypersurface with the past light cone of the point where the observer has exited a given vacuum. [ 3 ] : 2 Put another way, the causal diamond is [ 4 ] The causal diamond measure multiplies the following quantities: [ 9 ] : 1, 4 Different prior probabilities of vacuum types yield different results. [ 3 ] : 2 Entropy production can be approximated as the number of galaxies in the diamond. [ 3 ] : 2 The watcher measure imagines the world line of an eternal "watcher" that passes through an infinite number of Big Crunch singularities. [ 10 ] In all "cutoff" schemes for an expanding infinite multiverse, a finite percentage of observers reach the cutoff during their lifetimes. 
Under most schemes, if a current observer is still alive five billion years from now, then the later stages of their life must somehow be "discounted" by a factor of around two compared to their current stages of life. For such an observer, Bayes' theorem may appear to break down over this timescale due to anthropic selection effects; this hypothetical breakdown is sometimes called the "Guth–Vanchurin paradox". One proposed resolution to the paradox is to posit a physical "end of time" that has a fifty percent chance of occurring in the next few billion years. Another, overlapping, proposal is to posit that an observer no longer physically exists when it passes outside a given causal patch, similar to models where a particle is destroyed or ceases to exist when it falls through a black hole's event horizon. [ 11 ] [ 12 ] Guth and Vanchurin have pushed back on such "end of time" proposals, stating that while "(later) stages of my life will contribute (less) to multiversal averages" than earlier stages, this paradox need not be interpreted as a physical "end of time". The literature proposes at least five possible resolutions: [ 13 ] [ 14 ] Guth and Vanchurin hypothesize that standard probability theories might be incorrect, which would have counterintuitive consequences. [ 14 ]
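The ordering-dependence of cutoff fractions, illustrated earlier in the article with even integers, can be made concrete with a short sketch. The two enumerations below are the article's examples (the usual ordering, and an ordering in which each odd number is followed by two consecutive even numbers); the cutoff values are arbitrary:

```python
from itertools import count, islice

# Two enumerations of the positive integers.  Under a finite cutoff,
# the fraction of even numbers seen so far depends on the ordering,
# illustrating why cutoff measures over an infinite collection are ambiguous.

def natural_order():
    return count(1)                 # 1, 2, 3, 4, ...

def odd_then_two_evens():
    odd, even = 1, 2
    while True:                     # 1, 2, 4, 3, 6, 8, 5, 10, 12, ...
        yield odd
        yield even
        yield even + 2
        odd += 2
        even += 4

def even_fraction(sequence, cutoff):
    items = list(islice(sequence, cutoff))
    return sum(1 for n in items if n % 2 == 0) / cutoff

if __name__ == "__main__":
    for cutoff in (10, 1_000, 100_000):
        print(cutoff,
              even_fraction(natural_order(), cutoff),       # converges to 1/2
              even_fraction(odd_then_two_evens(), cutoff))  # converges to 2/3
```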
https://en.wikipedia.org/wiki/Measure_problem_(cosmology)
A measure space is a basic object of measure theory , a branch of mathematics that studies generalized notions of volumes . It contains an underlying set, the subsets of this set that are feasible for measuring (the σ -algebra ) and the method that is used for measuring (the measure ). One important example of a measure space is a probability space . A measurable space consists of the first two components without a specific measure. A measure space is a triple ( X , A , μ ) , {\displaystyle (X,{\mathcal {A}},\mu ),} where [ 1 ] [ 2 ] In other words, a measure space consists of a measurable space ( X , A ) {\displaystyle (X,{\mathcal {A}})} together with a measure on it. Set X = { 0 , 1 } {\displaystyle X=\{0,1\}} . The σ {\textstyle \sigma } -algebra on finite sets such as the one above is usually the power set , which is the set of all subsets (of a given set) and is denoted by ℘ ( ⋅ ) . {\textstyle \wp (\cdot ).} Sticking with this convention, we set A = ℘ ( X ) {\displaystyle {\mathcal {A}}=\wp (X)} In this simple case, the power set can be written down explicitly: ℘ ( X ) = { ∅ , { 0 } , { 1 } , { 0 , 1 } } . {\displaystyle \wp (X)=\{\varnothing ,\{0\},\{1\},\{0,1\}\}.} As the measure, define μ {\textstyle \mu } by μ ( { 0 } ) = μ ( { 1 } ) = 1 2 , {\displaystyle \mu (\{0\})=\mu (\{1\})={\frac {1}{2}},} so μ ( X ) = 1 {\textstyle \mu (X)=1} (by additivity of measures) and μ ( ∅ ) = 0 {\textstyle \mu (\varnothing )=0} (by definition of measures). This leads to the measure space ( X , ℘ ( X ) , μ ) . {\textstyle (X,\wp (X),\mu ).} It is a probability space , since μ ( X ) = 1. {\textstyle \mu (X)=1.} The measure μ {\textstyle \mu } corresponds to the Bernoulli distribution with p = 1 2 , {\textstyle p={\frac {1}{2}},} which is for example used to model a fair coin flip. Most important classes of measure spaces are defined by the properties of their associated measures. This includes, in order of increasing generality: Another class of measure spaces are the complete measure spaces . [ 4 ]
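A minimal sketch of the two-point example above, representing μ by its values on the singletons {0} and {1} and extending it to the whole power set by additivity; the helper names are illustrative only:

```python
from itertools import combinations

# The measure space ({0, 1}, P(X), mu) described above, with
# mu({0}) = mu({1}) = 1/2.  The measure of any measurable set
# (here: any subset) follows by additivity from the point masses.

X = {0, 1}
point_mass = {0: 0.5, 1: 0.5}

def power_set(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def mu(a):
    assert a <= X, "mu is only defined on subsets of X"
    return sum(point_mass[x] for x in a)

if __name__ == "__main__":
    for a in power_set(X):
        print(set(a) or "{}", mu(a))
    # mu(empty set) = 0 and mu(X) = 1, so this is a probability space
    # (the fair coin flip mentioned above).
```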
https://en.wikipedia.org/wiki/Measure_space
In the oil industry, measured depth (commonly referred to as MD, or just the depth ) is the length of the drilled borehole . [ 1 ] In conventional vertical wells, this coincides with the true vertical depth , but in directional or horizontal wells, especially those using extended reach drilling , the two can deviate greatly. For example, as of 2012 a borehole in the Odoptu field, Sakhalin-I , has the greatest measured depth of any borehole at 12,345 m, but most of this is horizontal, giving it a true vertical depth of only 1,784 m. For comparison, the Kola Superdeep Borehole has a slightly shorter measured depth of 12,262 m, but since it is a vertical borehole, this is also its true vertical depth, making the Kola Superdeep Borehole about 6.9 times deeper in true vertical depth. This article related to natural gas, petroleum or the petroleum industry is a stub . You can help Wikipedia by expanding it .
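The distinction between measured depth and true vertical depth can be illustrated with a simplified calculation. The sketch below assumes a hypothetical set of straight survey segments, each with an along-hole length and an inclination from vertical, and uses a simple tangential sum; real directional surveys are normally reduced with the minimum-curvature method, and the segment values are invented:

```python
import math

# Measured depth (MD) versus true vertical depth (TVD), simplified:
# MD sums the along-hole segment lengths, while TVD sums only their
# vertical components.  (Illustrative tangential method only; the
# segments below are invented.)

segments = [
    (500.0, 0.0),    # (length in metres, inclination from vertical in degrees)
    (1500.0, 30.0),
    (4000.0, 80.0),
    (6000.0, 90.0),  # a fully horizontal section adds nothing to TVD
]

md = sum(length for length, _ in segments)
tvd = sum(length * math.cos(math.radians(inc)) for length, inc in segments)

print(f"measured depth: {md:.0f} m, true vertical depth: {tvd:.0f} m")
```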
https://en.wikipedia.org/wiki/Measured_depth
A measured environmental concentration (MEC) is the concentration of a chemical substance found in an environmental sample. The concentration of the compound may result from direct contamination, transformation and/or metabolization of a different chemical contaminant, natural origin, or a combination of these sources. The MEC is used as a reference in the context of Chemical Safety Assessments (CSA) and should be compared with the respective Predicted Environmental Concentration (PEC) and Predicted No-Effect Concentration (PNEC) in order to decide whether the exposure model is valid and the compound-related risk is controlled. This toxicology -related article is a stub . You can help Wikipedia by expanding it .
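A minimal sketch of the comparisons described above. It assumes the common risk-characterisation convention that a concentration-to-PNEC ratio below 1 indicates controlled risk, and treats MEC ≤ PEC as a check that the exposure model does not under-predict exposure; the concentrations are invented for illustration:

```python
# MEC compared with PEC (does the exposure model under-predict?) and
# with PNEC (is the risk controlled?).  The threshold of 1 follows the
# usual risk-characterisation convention; all values are illustrative (mg/L).

def assess(mec, pec, pnec):
    model_conservative = mec <= pec     # measured value within the prediction
    risk_quotient = mec / pnec          # < 1 suggests the risk is controlled
    print(f"MEC={mec}, PEC={pec}, PNEC={pnec}")
    print(f"  exposure model conservative: {model_conservative}")
    print(f"  risk quotient MEC/PNEC = {risk_quotient:.2f} "
          f"({'controlled' if risk_quotient < 1 else 'not controlled'})")

assess(mec=0.02, pec=0.05, pnec=0.10)
assess(mec=0.30, pec=0.05, pnec=0.10)
```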
https://en.wikipedia.org/wiki/Measured_environmental_concentration
A variety of objective means exist to empirically measure biodiversity . Each measure relates to a particular use of the data, and is likely to be associated with the variety of genes. Biodiversity is commonly measured in terms of taxonomic richness of a geographic area over a time interval. In order to calculate biodiversity, species evenness, species richness, and species diversity are to be obtained first. Species evenness is the relative number of individuals of each species in a given area. [ 1 ] Species richness [ 2 ] is the number of species present in a given area. Species diversity [ 3 ] is the relationship between species evenness and species richness. There are many ways to measure biodiversity within a given ecosystem. The two most popular are the Shannon–Weaver diversity index , [ 4 ] commonly referred to as the Shannon diversity index, and Simpson's diversity index . [ 5 ] Many scientists prefer Shannon's diversity index simply because it takes species richness into account. [ 6 ] Biodiversity is usually plotted as the richness of a geographic area, with some reference to a temporal scale. Types of biodiversity include taxonomic or species , ecological , morphological, and genetic diversity . Taxonomic diversity, that is, the number of species, genera, or families, is the most commonly assessed type. [ 7 ] A few studies have attempted to quantitatively clarify the relationship between different types of diversity. For example, the biologist Sarda Sahney has found a close link between vertebrate taxonomic and ecological diversity. [ 8 ] Conservation biologists have also designed a variety of objective means to empirically measure biodiversity. Each measure of biodiversity relates to a particular use of the data. For practical conservationists , measurements should include a quantification of values that are commonly shared among locally affected organisms, including humans [ clarification needed ] . For others, a more economically defensible definition should allow the ensuring of continued possibilities for both adaptation and future use by humans, assuring environmental sustainability . As a consequence, biologists argue that this measure is likely to be associated with the variety of genes. Since it cannot always be said which genes are more likely to prove beneficial, the best choice [ citation needed ] for conservation is to assure the persistence of as many genes as possible. For ecologists, this latter approach is sometimes considered too restrictive, as it prohibits ecological succession . Biodiversity is usually plotted as taxonomic richness of a geographic area, with some reference to a temporal scale. Whittaker [ 9 ] described three common metrics used to measure species-level biodiversity, encompassing attention to species richness or species evenness : More recently, two new indices have been introduced. The Mean Species Abundance Index (MSA) calculates the trend in population size of a cross section of the species. It does this in line with the CBD 2010 indicator for species abundance . [ 10 ] The Biodiversity Intactness Index (BII) measures biodiversity change using abundance data on plants, fungi and animals worldwide. The BII shows how local terrestrial biodiversity responds to human pressures such as land use change and intensification. [ 11 ] Alternatively, other types of diversity may be plotted against a timescale: These different types of diversity may not be independent.
There is, for example, a close link between vertebrate taxonomic and ecological diversity. [ 12 ] Other authors tried to organize the measurements of biodiversity in the following way: [ 13 ] Diversity may be measured at different scales. These are some common indices used by ecologists:
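Two of the indices discussed above, the Shannon index and Simpson's index, can be computed directly from species counts. A minimal sketch in Python, using one common form of each index and expressing evenness as H / ln S; the example community is invented:

```python
import math
from collections import Counter

# Species richness S, Shannon diversity H = -sum(p_i * ln p_i),
# one common form of Simpson's index (1 - sum(p_i^2)), and evenness
# expressed as H / ln S.  The community below is invented.

def diversity(counts):
    n = sum(counts.values())
    p = [c / n for c in counts.values() if c > 0]
    s = len(p)                                     # species richness
    shannon = -sum(pi * math.log(pi) for pi in p)  # Shannon index H
    simpson = 1.0 - sum(pi * pi for pi in p)       # Simpson diversity
    evenness = shannon / math.log(s) if s > 1 else 0.0
    return {"richness": s, "shannon": shannon,
            "simpson": simpson, "evenness": evenness}

community = Counter({"oak": 40, "birch": 30, "pine": 20, "ash": 10})
print(diversity(community))
```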
https://en.wikipedia.org/wiki/Measurement_of_biodiversity
Throughput of a network can be measured using various tools available on different platforms. This page explains the theory behind what these tools set out to measure and the issues regarding these measurements. There are several reasons for measuring throughput in networks. People are often concerned about measuring the maximum data throughput in bits per second of a communications link or network access. A typical method of performing a measurement is to transfer a 'large' file from one system to another system and measure the time required to complete the transfer or copy of the file. The throughput is then calculated by dividing the file size by the time to get the throughput in megabits , kilobits , or bits per second. Unfortunately, such an exercise often measures the goodput , which is less than the maximum theoretical data throughput, leading people to believe that their communications link is not operating correctly. In fact, there are many overheads accounted for in throughput in addition to transmission overheads, including latency , TCP Receive Window size and system limitations, which means the calculated goodput does not reflect the maximum achievable throughput. [ 1 ] The maximum bandwidth can be calculated as follows: Throughput ≤ RWIN / RTT {\displaystyle \mathrm {Throughput} \leq {\frac {\mathrm {RWIN} }{\mathrm {RTT} }}\,\!} where RWIN is the TCP Receive Window and RTT is the round-trip time for the path. The maximum TCP window size in the absence of the TCP window scale option is 65,535 bytes . Example: maximum bandwidth = 65,535 bytes / 0.220 s ≈ 297,886 B/s, i.e. about 2.383 Mbit /s. Over a single TCP connection between those endpoints, the tested bandwidth will be restricted to about 2.383 Mbit/s even if the contracted bandwidth is greater. Bandwidth test software is used to determine the maximum bandwidth of a network or internet connection. It is typically undertaken by attempting to download or upload the maximum amount of data in a certain period of time, or a certain amount of data in the minimum amount of time. For this reason, bandwidth tests can delay internet transmissions through the internet connection as they are undertaken, and can cause inflated data charges. The throughput of communications links is measured in bits per second (bit/s), kilobits per second (kbit/s), megabits per second (Mbit/s) and gigabits per second (Gbit/s). In this application, kilo, mega and giga are the standard S.I. prefixes indicating multiplication by 1,000 ( kilo ), 1,000,000 ( mega ), and 1,000,000,000 ( giga ). File sizes are typically measured in bytes — kilobytes , megabytes , and gigabytes being usual, where a byte is eight bits. In modern textbooks one kilobyte is defined as 1,000 bytes, one megabyte as 1,000,000 bytes, etc., in accordance with the 1998 International Electrotechnical Commission (IEC) standard. However, the convention adopted by Windows systems is to define 1 kilobyte as 1,024 (or 2^10) bytes, which is equal to 1 kibibyte . Similarly, a file size of "1 megabyte" is 1,024 × 1,024 bytes (equal to 1 mebibyte ), and "1 gigabyte" is 1,024 × 1,024 × 1,024 bytes (equal to 1 gibibyte ). It is usual for people to abbreviate commonly used expressions. For file sizes, it is usual for someone to say that they have a '64 k' file (meaning 64 kilobytes), or a '100 meg' file (meaning 100 megabytes).
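The receive-window bound given above can be checked with a few lines of Python; the 65,535-byte window and 220 ms round-trip time are the example figures from this article:

```python
# The TCP receive-window bound: throughput <= RWIN / RTT.
# With the maximum unscaled window of 65,535 bytes and a 220 ms
# round-trip time this gives roughly 2.38 Mbit/s, regardless of the
# contracted line rate.

def window_limited_throughput_bps(rwin_bytes, rtt_seconds):
    return rwin_bytes * 8 / rtt_seconds   # bits per second

if __name__ == "__main__":
    bps = window_limited_throughput_bps(65_535, 0.220)
    print(f"{bps / 1e6:.3f} Mbit/s")      # ~2.383 Mbit/s
```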
When talking about circuit bit rates , people will interchangeably use the terms throughput , bandwidth and speed, and refer to a circuit as being a '64 k' circuit, or a '2 meg' circuit — meaning 64 kbit/s or 2 Mbit/s (see also the List of connection bandwidths ). However, a '64 k' circuit will not transmit a '64 k' file in one second. This may not be obvious to those unfamiliar with telecommunications and computing, so misunderstandings sometimes arise. In actuality, a 64 kilobyte file is 64 × 1,024 × 8 bits in size and the 64 k circuit will transmit bits at a rate of 64 × 1,000 bit/s, so the amount of time taken to transmit a 64 kilobyte file over the 64 k circuit will be at least (64 × 1,024 × 8)/(64 × 1,000) seconds, which works out to be 8.192 seconds. Some equipment can improve matters by compressing the data as it is sent. This is a feature of most analog modems and of several popular operating systems . If the 64 k file can be shrunk by compression , the time taken to transmit can be reduced. This can be done invisibly to the user, so a highly compressible file may be transmitted considerably faster than expected. As this 'invisible' compression cannot easily be disabled, it therefore follows that when measuring throughput by using files and timing the time to transmit, one should use files that cannot be compressed. Typically, this is done using a file of random data, which becomes harder to compress the closer to truly random it is. Assuming your data cannot be compressed, the 8.192 seconds to transmit a 64 kilobyte file over a 64 kilobit/s communications link is a theoretical minimum time which will not be achieved in practice. This is due to the effect of overheads which are used to format the data in an agreed manner so that both ends of a connection have a consistent view of the data. There are at least two issues that aren't immediately obvious for transmitting compressed files: [ 3 ] A common communications link used by many people is the asynchronous start-stop , or just "asynchronous", serial link. If you have an external modem attached to your home or office computer, the chances are that the connection is over an asynchronous serial connection. Its advantage is that it is simple — it can be implemented using only three wires: Send, Receive and Signal Ground (or Signal Common). In an RS-232 interface, an idle connection has a continuous negative voltage applied. A 'zero' bit is represented as a positive voltage difference with respect to the Signal Ground and a 'one' bit is a negative voltage with respect to signal ground, thus indistinguishable from the idle state. This means you need to know when a 'one' bit starts to distinguish it from idle. This is done by agreeing in advance how fast data will be transmitted over a link, then using a start bit to signal the start of a byte — this start bit will be a 'zero' bit. Stop bits are 'one' bits i.e. negative voltage. Actually, more things will have been agreed in advance — the speed of bit transmission, the number of bits per character, the parity and the number of stop bits (signifying the end of a character). So a designation of 9600-8-E-2 would be 9,600 bits per second, with eight bits per character, even parity and two stop bits. A common set-up of an asynchronous serial connection would be 9600-8-N-1 (9,600 bit/s, 8 bits per character, no parity and 1 stop bit) - a total of 10 bits transmitted to send one 8 bit character (one start bit, the 8 bits making up the byte transmitted and one stop bit). 
This is an overhead of 20%, so a 9,600 bit/s asynchronous serial link will not transmit data at 9600/8 bytes per second (1200 byte/s) but actually, in this case, 9600/10 bytes per second (960 byte/s), which is considerably slower than expected. It can get worse. If parity is specified and we use 2 stop bits, the overhead for carrying one 8 bit character is 4 bits (one start bit, one parity bit and two stop bits) — 50% of the size of the data itself! In this case a 9600 bit/s connection will carry 9600/12 byte/s (800 byte/s). Asynchronous serial interfaces will commonly support bit transmission speeds of up to 230.4 kbit/s. If the interface is set up with no parity and one stop bit, this means the byte transmission rate is 23.04 kbyte/s. The advantage of the asynchronous serial connection is its simplicity. One disadvantage is its low efficiency in carrying data. This can be overcome by using a synchronous interface. In this type of interface, a clock signal is added on a separate wire, and the bits are transmitted in synchrony with the clock — the interface no longer has to look for the start and stop bits of each individual character — however, it is necessary to have a mechanism to ensure the sending and receiving clocks are kept in synchrony, so data is divided up into frames of multiple characters separated by known delimiters. There are three common coding schemes for framed communications — HDLC , PPP , and Ethernet . When using HDLC , rather than each byte having a start, optional parity, and one or two stop bits, the bytes are gathered together into a frame . The start and end of the frame are signalled by the 'flag', and error detection is carried out by the frame check sequence. If the frame has a maximum sized address of 32 bits, a maximum sized control part of 16 bits and a maximum sized frame check sequence of 16 bits, the overhead per frame could be as high as 64 bits. If each frame carried but a single byte, the data throughput efficiency would be extremely low. However, the bytes are normally gathered together, so that even with a maximal overhead of 64 bits, frames carrying more than 24 bytes are more efficient than asynchronous serial connections. As frames can vary in size because they can have different numbers of bytes being carried as data, the overhead of an HDLC connection is not fixed. [ 4 ] The " point-to-point protocol " (PPP) is defined by the Internet Request For Comment documents RFC 1570, RFC 1661 and RFC 1662. With respect to the framing of packets, PPP is quite similar to HDLC, but it supports both bit-oriented and byte-oriented ("octet-stuffed") methods of delimiting frames while maintaining data transparency. [ 5 ] Ethernet is a " local area network " (LAN) technology, which is also framed. The way the frame is electrically defined on a connection between two systems differs from the typical wide-area networking technologies that use HDLC or PPP, but these details are not important for throughput calculations. Ethernet is a shared medium, so it is not guaranteed that only the two systems that are transferring a file between themselves will have exclusive access to the connection. If several systems are attempting to communicate simultaneously, the throughput between any pair can be substantially lower than the nominal bandwidth available. [ 6 ] Dedicated point-to-point links are not the only option for many connections between systems. Frame Relay , ATM , and MPLS based services can also be used.
When calculating or estimating data throughputs, the details of the frame/cell/packet format and the technology's detailed implementation need to be understood. [ 7 ] Frame Relay uses a modified HDLC format to define the frame format that carries data. [ 8 ] Asynchronous Transfer Mode (ATM) uses a radically different method of carrying data. Rather than using variable length frames or packets, data is carried in fixed size cells. Each cell is 53 bytes long, with the first 5 bytes defined as the header, and the following 48 bytes as payload. Data networking commonly requires packets of data that are larger than 48 bytes, so there is a defined adaptation process that specifies how larger packets of data should be divided up in a standard manner to be carried by the smaller cells. This process varies according to the data carried, so in ATM nomenclature, there are different ATM Adaptation Layers . The process defined for most data is named ATM Adaptation Layer No. 5 or AAL5 . Understanding throughput on ATM links requires a knowledge of which ATM adaptation layer has been used for the data being carried. [ 9 ] Multiprotocol Label Switching (MPLS) adds a standard tag or header known as a 'label' to existing packets of data. In certain situations it is possible to use MPLS in a 'stacked' manner, so that labels are added to packets that have already been labelled. Connections between MPLS systems can also be 'native', with no underlying transport protocol, or MPLS labelled packets can be carried inside frame relay or HDLC packets as payloads. Correct throughput calculations need to take such configurations into account. For example, a data packet could have two MPLS labels attached via 'label-stacking', then be placed as payload inside an HDLC frame. This generates more overhead that has to be taken into account than a single MPLS label attached to a packet which is then sent 'natively', with no underlying protocol, to a receiving system. [ 10 ] Few systems transfer files and data by simply copying the contents of the file into the 'Data' field of HDLC or PPP frames — another protocol layer is used to format the data inside the 'Data' field of the HDLC or PPP frame. The most commonly used such protocol is Internet Protocol (IP), defined by RFC 791. This imposes its own overheads. Again, few systems simply copy the contents of files into IP packets, but use yet another protocol that manages the connection between two systems — TCP ( Transmission Control Protocol ), defined by RFC 793. This adds its own overhead. Finally, yet another protocol layer manages the actual data transfer process. A commonly used protocol for this is the " file transfer protocol " (FTP). [ 11 ]
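To make the overhead arithmetic discussed in this article concrete, the sketch below computes the effective payload rate of an asynchronous serial link for two of the configurations described earlier, and the fixed payload efficiency of ATM cells:

```python
# Overhead arithmetic from the discussion above: an asynchronous serial
# link transmits extra framing bits per 8-bit character, and an ATM cell
# carries a 5-byte header for every 48 bytes of payload.

def async_byte_rate(bit_rate, data_bits=8, parity_bits=0, stop_bits=1):
    """Effective payload bytes per second with one start bit per character."""
    bits_per_char = 1 + data_bits + parity_bits + stop_bits
    return bit_rate / bits_per_char * (data_bits / 8)

print(async_byte_rate(9600))                              # 9600-8-N-1 -> 960 B/s
print(async_byte_rate(9600, parity_bits=1, stop_bits=2))  # 9600-8-E-2 -> 800 B/s

# ATM: 48 payload bytes out of every 53-byte cell.
atm_efficiency = 48 / 53
print(f"ATM payload efficiency: {atm_efficiency:.1%}")    # ~90.6%
```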
https://en.wikipedia.org/wiki/Measuring_network_throughput
Mebendazole ( MBZ ), sold under the brand name Vermox among others, is a medication used to treat a number of parasitic worm infestations. [ 5 ] This includes ascariasis , pinworm infection , hookworm infections , guinea worm infections and hydatid disease , among others. [ 5 ] It has been used for treatment of giardiasis but is not a preferred agent. [ 6 ] [ 7 ] It is taken by mouth . [ 5 ] Mebendazole is usually well tolerated. [ 5 ] Common side effects include headache , vomiting, and ringing in the ears . [ 5 ] If used at large doses it may cause bone marrow suppression . [ 5 ] It is unclear if it is safe in pregnancy. [ 5 ] [ 2 ] Mebendazole is a broad-spectrum antihelminthic agent of the benzimidazole type. [ 5 ] Mebendazole came into use in 1971, after it was developed by Janssen Pharmaceutica in Belgium . [ 8 ] It is on the World Health Organization's List of Essential Medicines . [ 9 ] Mebendazole is available as a generic medication . [ 10 ] Mebendazole is a highly effective, broad-spectrum antihelmintic indicated for the treatment of nematode infestations, including roundworm, hookworm, whipworm, threadworm (pinworm), and the intestinal form of trichinosis prior to its spread into the tissues beyond the digestive tract. Other drugs are used to treat worm infections outside the digestive tract, as mebendazole is poorly absorbed into the bloodstream. [ 11 ] Mebendazole is used alone in those with mild to moderate infestations. It kills parasites relatively slowly, and in those with very heavy infestations, it can cause some parasites to migrate out of the digestive system, leading to appendicitis, bile duct problems, or intestinal perforation. To avoid this, heavily infested patients may be treated with piperazine , either before or instead of mebendazole. Piperazine paralyses the parasites, causing them to pass in the feces. [ 12 ] It is also used rarely in the treatment of cystic echinococcosis, also known as hydatid disease . Evidence for effectiveness for this disease, however, is poor. [ 13 ] Mebendazole and other benzimidazole antithelmetics are active against both larval and adult stages of nematodes, and in the cases of roundworm and whipworm, kill the eggs, as well. Paralysis and death of the parasites occurs slowly, and elimination in the feces may require several days. [ 11 ] Mebendazole has been shown to cause ill effects in pregnancy in animal models, and no adequate studies of its effects in human pregnancy have been conducted. [ 2 ] Whether it can be passed by breastfeeding is unknown. [ 14 ] [ 2 ] Mebendazole sometimes causes diarrhea, abdominal pain, and elevated liver enzymes. In rare cases, it has been associated with a dangerously low white blood cell count, low platelet count , and hair loss, [ 14 ] [ 15 ] with a risk of agranulocytosis in rare cases. Carbamazepine and phenytoin lower serum levels of mebendazole. Cimetidine does not appreciably raise serum mebendazole (in contrast to the similar drug albendazole ), consistent with its poor systemic absorption. [ 16 ] [ 17 ] Stevens–Johnson syndrome and the more severe toxic epidermal necrolysis can occur when mebendazole is combined with high doses of metronidazole . [ 18 ] Mebendazole works by selectively inhibiting the synthesis of microtubules via binding to the colchicine binding site of β-tubulin , thereby blocking polymerisation of tubulin dimers in intestinal cells of parasites. 
[ 19 ] Disruption of cytoplasmic microtubules leads to blocking the uptake of glucose and other nutrients, resulting in the gradual immobilization and eventual death of the helminths . [ 11 ] Poor absorption in the digestive tract makes mebendazole an efficient drug for treating intestinal parasitic infections with limited adverse effects. However, mebendazole also has an impact on mammalian cells, mostly by inhibiting polymerisation of tubulin dimers, thereby disrupting essential microtubule structures such as the mitotic spindle . [ 20 ] Disassembly of the mitotic spindle then leads to apoptosis mediated via dephosphorylation of Bcl-2 , which allows the pro-apoptotic protein Bax to dimerize and initiate programmed cell death. [ 21 ] Mebendazole is available as a generic medication . [ 10 ] Mebendazole is distributed in international markets by Johnson and Johnson and a number of generic manufacturers. [ 22 ] In the United States, mebendazole is sometimes sold at about 200 times the price of the same medication in other countries. [ 23 ] [ 24 ] [ 25 ]
https://en.wikipedia.org/wiki/Mebendazole
mecA is a gene found in bacterial cells which allows them to be resistant to antibiotics such as methicillin , penicillin and other penicillin-like antibiotics. [ 1 ] The bacteria strain most commonly known to carry mecA is methicillin-resistant Staphylococcus aureus ( MRSA ). In Staphylococcus species, mecA is spread through the staphylococcal chromosome cassette SCC mec genetic element. [ 2 ] Resistant strains cause many hospital-acquired infections . [ 3 ] mecA encodes the protein PBP2A ( penicillin-binding protein 2A), a transpeptidase that helps form the bacterial cell wall . PBP2A has a lower affinity for beta-lactam antibiotics such as methicillin and penicillin than DD -transpeptidase does, so it does not bind to the ringlike structure of penicillin-like antibiotics. This enables transpeptidase activity in the presence of beta-lactams, preventing them from inhibiting cell wall synthesis. [ 4 ] The bacteria can then replicate as normal. Methicillin resistance first emerged in hospitals in Staphylococcus aureus that was more aggressive and failed to respond to methicillin treatment. [ 5 ] The prevalence of this strain, MRSA, continued to increase, reaching up to 60% of British hospitals, and has spread throughout the world and beyond hospital settings. [ 5 ] [ 6 ] Researchers traced the source of this resistance to the mecA gene acquired through a mobile genetic element , staphylococcal cassette chromosome mec, present in all known MRSA strains. [ 7 ] On February 27, 2017, the World Health Organization (WHO) put MRSA on their list of priority bacterial resistant pathogens and made it a high priority target for further research and treatment development. [ 8 ] Successful treatment of MRSA begins with the detection of mecA , usually through polymerase chain reaction (PCR). Alternative methods include enzymatic detection PCR, which labels the PCR with enzymes detectable by immunoabsorbant assays. This takes less time and does not need gel electrophoresis , which can be costly, tedious, and unpredictable. [ 9 ] cefoxitin disc diffusion uses phenotypic resistance to test not only for methicillin resistant strains but also for low resistant strains. [ 10 ] The presence of mecA alone does not determine resistant strains; further phenotypic assays of mecA -positive strains can determine how resistant the strain is to methicillin. [ 11 ] These phenotypic assays cannot rely on the accumulation of PBP2a, the protein product of mecA , as a test for methicillin resistance, as no connection between protein amount and resistance exists. [ 12 ] mecA is on staphylococcal cassette chromosome mec , a mobile gene element from which the gene can undergo horizontal gene transfer and insert itself into the host species, which can be any species in the Staphylococcus genus. [ 13 ] This cassette is a 52 kilobase piece of DNA that contains mecA and two recombinase genes, ccrA and ccrB . [ 7 ] Proper insertion of the mecA complex into the host genome requires the recombinases. Researchers have isolated multiple genetic variants from resistant strains of S. aureus , but all variants function similarly and have the same insertion site, near the host DNA origin of replication . [ 14 ] mecA also forms a complex with two regulatory units, mecI and mecR1 . These two genes can repress mecA ; deletions or knock-outs in these genes increase resistance of S. aureus to methicillin. [ 15 ] The S. 
aureus strains isolated from humans either lack these regulatory elements or contain mutations in these genes that cause a loss of function of the protein products that inhibit mecA . This, in turn, causes constitutive transcription of mecA . [ 16 ] This cassette chromosome can move between species. Two other Staphylococcus species, S. epidermidis and S. haemolyticus , show conservation in this insertion site, not only for mecA but also for other non-essential genes the cassette chromosome can carry. [ 17 ] Penicillin, its derivatives such as methicillin, and other beta-lactam antibiotics inhibit the activity of the cell-wall forming penicillin-binding protein family (PBP 1, 2, 3 and 4). This disrupts the cell wall structure, causing the cytoplasm to leak and the cell to die. [ 18 ] However, mecA codes for PBP2a, which has a lower affinity for beta-lactams; this preserves the structural integrity of the cell wall, preventing cell death. [ 18 ] Bacterial cell wall synthesis in S. aureus depends on transglycosylation to form a linear polymer of sugar monomers and transpeptidation to form interlinking peptides that strengthen the newly developed cell wall. PBPs have a transpeptidase domain, and scientists thought only monofunctional enzymes catalyze transglycosylation, yet PBP2 has domains to perform both essential processes. [ 19 ] When antibiotics enter the medium, they bind to the transpeptidation domain and inhibit PBPs from cross-linking muropeptides, therefore preventing the formation of a stable cell wall. With cooperative action, PBP2a lacks the proper receptor for the antibiotics and continues transpeptidation, preventing cell wall breakdown. [ 20 ] The functionality of PBP2a depends on two structural factors of the cell wall of S. aureus. First, for PBP2a to fit properly onto the cell wall and continue transpeptidation, it needs the proper amino acid residues, specifically a pentaglycine residue and an amidated glutamate residue. [ 21 ] Second, PBP2a has an effective transpeptidase activity but lacks the transglycosylation domain of PBP2, which builds the backbone of the cell wall with polysaccharide monomers, so PBP2a must rely on PBP2 to continue this process. [ 21 ] [ 20 ] The latter forms a therapeutic target to improve the ability of beta-lactams to prevent cell wall synthesis in resistant S. aureus . Identifying inhibitors of glycosylases involved in cell wall synthesis and modulating their expression can resensitize these previously resistant bacteria to beta-lactam treatment. [ 22 ] For example, epicatechin gallate , a compound found in green tea, has shown signs of lowering the resistance to beta-lactams, to the point where oxacillin, which acts on PBP2 and PBP2a, effectively inhibits cell wall formation. [ 23 ] Interactions with other genes decrease resistance to beta-lactams in resistant strains of S. aureus . These gene networks are mainly involved in cell division and in cell wall synthesis and function, where PBP2a localizes. [ 24 ] Furthermore, other PBP proteins also affect the resistance of S. aureus to antibiotics. Oxacillin resistance decreased in S. aureus strains when expression of PBP4 was inhibited but PBP2a was not. [ 25 ] mecA is acquired and transmitted through a mobile genetic element that inserts itself into the host genome. That structure is conserved between the mecA gene product and a homologous mecA gene product in Staphylococcus sciuri . As of 2007, the function of the mecA homologue in S.
sciuri remains unknown, but it may be a precursor of the mecA gene found in S. aureus . [ 26 ] The structure of the protein product of this homologue is so similar that the protein can function in S. aureus. When the mecA homologue of beta-lactam-resistant S. sciuri is inserted into antibiotic-sensitive S. aureus, antibiotic resistance increases. Even though the muropeptides (peptidoglycan precursors) that both species use are the same, the protein product of the mecA gene of S. sciuri can continue cell wall synthesis when a beta-lactam inhibits the PBP protein family. [ 27 ] To further understand the origin of mecA , specifically the mecA complex found on the staphylococcal cassette chromosome, researchers compared the mecA gene from S. sciuri with that of other Staphylococcus species. Nucleotide analysis shows the sequence of mecA is almost identical to the mecA homologue found in Staphylococcus fleurettii , the most significant candidate for the origin of the mecA gene on the staphylococcal cassette chromosome. Since the genome of S. fleurettii contains this gene, the cassette chromosome must originate from another species. [ 28 ]
https://en.wikipedia.org/wiki/MecA
Mechanical, Electrical, and Plumbing ( MEP ) refers to the installation of services which provide a functional and comfortable space for the building occupants. In residential and commercial buildings, these elements are often designed by specialized MEP engineers. MEP's design is important for planning, decision-making, accurate documentation, performance- and cost-estimation, construction, and operating/maintaining the resulting facilities. [ 1 ] MEP specifically encompasses the in-depth design and selection of these systems, as opposed to a tradesperson simply installing equipment. For example, a plumber may select and install a commercial hot water system based on common practice and regulatory codes. A team of MEP engineers will research the best design according to the principles of engineering, and supply installers with the specifications they develop. As a result, engineers working in the MEP field must understand a broad range of disciplines, including dynamics, mechanics, fluids, thermodynamics, heat transfer, chemistry, electricity, and computers. [ 2 ] As with other aspect of buildings, MEP drafting , design and documentation were traditionally done manually. Computer-aided design has some advantages over this, and often incorporates 3D modeling which is otherwise impractical. Building information modeling provides holistic design and parametric change management of the MEP design. [ 3 ] Maintaining documentation of MEP services may also require the use of a geographical information system or asset management system. The mechanical component of MEP is an important superset of HVAC services. Thus, it incorporates the control of environmental factors ( psychrometrics ), either for human comfort or for the operation of machines. Heating, cooling, ventilation and exhaustion are all key areas to consider in the mechanical planning of a building. [ 4 ] In special cases, water cooling/heating, humidity control or air filtration [ 5 ] may also be incorporated. For example, Google's data centres make extensive use of heat exchangers to cool their servers. [ 6 ] This system creates an additional overhead of 12% of initial energy consumption. This is a vast improvement from traditional active cooling units which have an overhead of 30-70%. [ 6 ] However, this novel and complicated method requires careful and expensive planning from mechanical engineers, who must work closely with the engineers designing the electrical and plumbing systems for a building. A major concern for people designing HVAC systems is the efficiency, i.e., the consumption of electricity and water. Efficiency is optimised by changing the design of the system on both large and small scales. Heat pumps [ 7 ] and evaporative cooling [ 8 ] are efficient alternatives to traditional systems, however they may be more expensive or harder to implement. The job of an MEP engineer is to compare these requirements and choose the most suitable design for the task. Electricians and plumbers usually have little to do with each other, other than keeping services out of each other's way. The introduction of mechanical systems requires the integration of the two so that plumbing may be controlled by electrics and electrics may be serviced by plumbing. Thus, the mechanical component of MEP unites the three fields. Virtually all modern buildings integrate some form of AC mains electricity for powering domestic and everyday appliances. 
Such systems typically run between 100 and 500 volts; however, their classifications and specifications vary greatly by geographical area (see Mains electricity by country ). Mains power is typically distributed through insulated copper wire concealed in the building's subfloor, wall cavities and ceiling cavity. These cables are terminated into sockets mounted to walls, floors or ceilings. Similar techniques are used for lights ("luminaires"); however, the two services are usually separated into different circuits with different protection devices at the distribution board . [ 9 ] Whilst the wiring for lighting is exclusively managed by electricians, the selection of luminaires or light fittings may be left to building owners or interior designers in some cases. Three-phase power is commonly used for industrial machines, particularly motors and high-load devices. Provision for three-phase power must be considered early in the design stage of a building because it has different regulations to domestic power supplies, and may affect aspects such as cable routes, switchboard location, large external transformers and connection from the street. [ 9 ] Advances in technology and the advent of computer networking have led to the emergence of a new facet of electrical systems incorporating data and telecommunications wiring. As of 2019, several derivative acronyms have been suggested for this area, including MEPIT (mechanical, electrical, plumbing and information technology) and MEPI (an abbreviation of MEPIT). [ 10 ] Equivalent names are "low voltage", "data", and "telecommunications" or "comms". A low voltage system used for telecommunications networking is not the same as a low voltage network . The information technology sector of electrical installations is used for computer networking, telephones, television, security systems, audio distribution, healthcare systems, robotics, and more. These services are typically installed by different tradespeople from those who install the higher-voltage mains wiring, and are often contracted out to very specific trades, e.g. security installers or audio integrators. Regulations on low voltage wiring are often less strict or less important to human safety. As a result, it is more common for this wiring to be installed or serviced by competent amateurs, despite constant attempts from the electrical industry to discourage this. Competent design of plumbing systems is necessary to prevent conflicts with other trades, and to avoid expensive rework or surplus supplies. The scope of standard residential plumbing usually covers mains pressure potable water, heated water (in conjunction with mechanical and/or electrical engineers), sewerage, stormwater, natural gas, and sometimes rainwater collection and storage. In commercial environments, these distribution systems expand to accommodate many more users, as well as the addition of other plumbing services such as hydroponics, irrigation, fuels, oxygen, vacuum/compressed air, solids transfer, and more. Plumbing systems also service air distribution/control, and therefore contribute to the mechanical part of MEP. Plumbing for HVAC systems involves the transfer of coolant, pressurized air, water, and occasionally other substances. Ducting for air transfer may also be considered plumbing, but is generally installed by different tradespeople. [ 11 ]
https://en.wikipedia.org/wiki/Mechanical,_electrical,_and_plumbing
The Mechanical Workshops Wilhelm Albrecht (MWA) ( German : Mechanische Werkstätten Wilhelm Albrecht ) were founded in 1926 by the innovator, engineer, and entrepreneur, Wilhelm Albrecht [ 1 ] in Berlin-Tempelhof . The logo he designed became an internationally known trademark for complete device systems for image-synchronous sound recording and processing in film and television studios since the 1950s. [ 2 ] Initially, the company developed and produced kits for radio receivers and supplied them to end users, subsequently – in advanced versions - to industrial radio manufacturers such as Blaupunkt. In 1936, the company moved to larger premises at Juliusstrasse in Berlin's Neukölln district. In the following years, development and manufacturing concentrated on equipment for communication technology. In 1944, the factory was partially damaged by a bomb attack. In the remaining workshops and with the inventory of materials and machinery saved after the end of the war, items for everyday’s use at that time were manufactured (e. g. tobacco cutting machines). In addition, repairs of damaged industrial equipment were carried out. In this context, Albrecht came into contact with remaining companies of the Berlin film industry. As early as 1946, MWA received an order from Berlin-based Kaudel-Film to design and manufacture an optical sound camera (LTK 1) and other devices for sound recording and processing of feature films in sync with the picture. In addition, the company developed and manufactured stationary and portable sound mixing consoles for film studios. However, Albrecht soon realized that the future of film sound recording and processing would not be the optical sound technology that had been used until then, but the magnetic sound process, which had already been experimented with in the USA. He developed the first magnetic film sound device in Europe, the so-called magnetic sound camera (MTK 1). At the beginning of 1950, MWA delivered the MTK 1 to the UFA studios in Berlin-Tempelhof where it was used for dubbing feature films and for sound recording of new productions until 1970. After a public presentation of the MTK 1, the UFA praised it as a “masterpiece of modern film equipment construction, and also in a new area of sound film technology” (see letter from the UfA dated April 4, 1950). In terms of technology, this was already the breakthrough, but an unrestricted delivery to film studios all over the world could only take place after a protracted patent dispute had ended. Numerous further developments followed, from the MTK 1 to the MR 10 travel model, also supplemented by the KT 2 camera table as well as e. g. amplifiers and sound deleting devices (de-magnetizers). The column-shaped magnetic film recorders/reproducers were a major adaptation to the needs in studios - starting in 1950 with the MB 1, followed by numerous successor models up to the MB 51, which were used in film and television studios worldwide for decades. In parallel to the constant further development of devices using magnetic sound technology - later also for the new medium of television - Albrecht even participated in significant constructions for phonograph technology . In 1956, the company was transformed into a GmbH (in English: Ltd.). Wilhelm Albrecht and his wife Helene, who had headed the commercial division since 1945, were appointed managing directors with sole authorization, engineer Günter Kieß was appointed Technical Manager. The latter held this position until 1991. 
He extensively documented MWA equipment and systems. In 1961, the company moved to larger premises on Maybachufer in Berlin-Neukölln. After the death of the company’s founder in 1962, his widow Helene Albrecht became general manager, a position which she held until 1974. The continuous and innovative further development of the sound devices, which were soon supplemented by picture film transports and film scanners as well as complementary control systems was groundbreaking for the changing sophisticated production processes in film and television studios and consolidated the company's position on the international market. In 1974, Margret Albrecht, daughter of Wilhelm and Helene Albrecht, took over the general management of the company. After expanding the development capacities – due to its significance supported by state funding for research and development - and enlarging the production premises, as well as optimizing the operational procedures and creating additional sales channels, the production and delivery capacities were doubled within six years. In 1980, Wilhelm Albrecht GmbH was included in the ADAC travel guide listing technical places of interest in Germany. In 1984, Helene Kunow-Albrecht and Margret Nilsson-Albrecht sold their shares to Berliner Elektro Beteiligungen, which enabled the successful flotation as Berliner Elektro Holding AG (stock company). Soon it became evident that digital technology would also largely replace the conventional analogue magnetic sound process. As a result, under the control of engineer Peter Stroetzel (general manager since 1990), the laser optical sound camera (LLK 3) for the production of optical sound negatives was developed, manufactured and, from 1996, sold to film studios and film laboratories all around the world. However, after the traditional medium film with added optical sound track was replaced by digital video technology, this market soon became saturated. In 2002, Stroetzel, who had taken over the company shares from Berliner Elektro Holding AG in 1997, applied for insolvency, the business was finally taken over by the MWA Nova GmbH. The logo, then over 75 years old, has been preserved.
https://en.wikipedia.org/wiki/Mechanical_Workshops_Wilhelm_Albrecht
Mechanical advantage is a measure of the force amplification achieved by using a tool, mechanical device or machine system. The device trades off input forces against movement to obtain a desired amplification in the output force. The model for this is the law of the lever . Machine components designed to manage forces and movement in this way are called mechanisms . [ 1 ] An ideal mechanism transmits power without adding to or subtracting from it. This means the ideal machine does not include a power source, is frictionless, and is constructed from rigid bodies that do not deflect or wear. The performance of a real system relative to this ideal is expressed in terms of efficiency factors that take into account departures from the ideal. The lever is a movable bar that pivots on a fulcrum attached to, or positioned on or across, a fixed point. The lever operates by applying forces at different distances from the fulcrum, or pivot. The location of the fulcrum determines a lever's class . Where a lever rotates continuously, it functions as a rotary second-class lever; the motion of the lever's end-point describes a fixed orbit along which mechanical energy can be exchanged (a hand-crank is an example). This kind of rotary leverage is widely used in modern mechanical power transmission, for example in gears, pulleys and friction drives. Mechanical advantage is commonly exploited in a 'collapsed' form through the use of more than one gear (a gearset); in such a gearset, gears having smaller radii and less inherent mechanical advantage are used. Making use of non-collapsed mechanical advantage requires a 'true length' rotary lever. Mechanical advantage is also incorporated into the design of certain types of electric motors; one such design is the 'outrunner'. As the lever pivots on the fulcrum, points farther from the pivot move faster than points closer to the pivot. For an ideal lever the power into the lever equals the power out of it. Power is the product of force and velocity, so forces applied to points farther from the pivot must be less than forces applied to points closer in. [ 1 ] If a and b are the distances from the fulcrum to points A and B, and if the force F A applied to A is the input force and F B exerted at B is the output force, the ratio of the velocities of points A and B is given by a / b , so the ratio of the output force to the input force, or mechanical advantage, is given by MA = F B / F A = a / b . This is the law of the lever , which Archimedes formulated using geometric reasoning. [ 2 ] It shows that if the distance a from the fulcrum to where the input force is applied (point A ) is greater than the distance b from the fulcrum to where the output force is applied (point B ), then the lever amplifies the input force. If the distance from the fulcrum to the input force is less than the distance from the fulcrum to the output force, then the lever reduces the input force. To Archimedes, who recognized the profound implications and practicalities of the law of the lever, has been attributed the famous claim, "Give me a place to stand and with a lever I will move the whole world." [ 3 ] The use of velocity in the static analysis of a lever is an application of the principle of virtual work . The requirement that the power input to an ideal mechanism equal the power output provides a simple way to compute mechanical advantage from the input-output speed ratio of the system.
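As a minimal numeric illustration of the law of the lever stated above, the following sketch computes the ideal output force from the two lever-arm distances; the function name and sample numbers are illustrative, not taken from the source.

```python
def lever_output_force(input_force, a, b):
    """Ideal lever: input applied at distance a from the fulcrum,
    output taken at distance b. Mechanical advantage MA = a / b,
    so F_out = F_in * a / b (rigid bar, no friction)."""
    return input_force * a / b

# Example: 100 N applied 0.8 m from the fulcrum, load 0.2 m from it.
print(lever_output_force(100.0, a=0.8, b=0.2))  # 400.0 N, i.e. MA = 4
```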
The power input to a gear train with a torque T A applied to the drive pulley which rotates at an angular velocity of ω A is P=T A ω A . Because the power flow is constant, the torque T B and angular velocity ω B of the output gear must satisfy the relation which yields This shows that for an ideal mechanism the input-output speed ratio equals the mechanical advantage of the system. This applies to all mechanical systems ranging from robots to linkages . Gear teeth are designed so that the number of teeth on a gear is proportional to the radius of its pitch circle, and so that the pitch circles of meshing gears roll on each other without slipping. The speed ratio for a pair of meshing gears can be computed from ratio of the radii of the pitch circles and the ratio of the number of teeth on each gear, its gear ratio . The velocity v of the point of contact on the pitch circles is the same on both gears, and is given by where input gear A has radius r A and meshes with output gear B of radius r B , therefore, where N A is the number of teeth on the input gear and N B is the number of teeth on the output gear. The mechanical advantage of a pair of meshing gears for which the input gear has N A teeth and the output gear has N B teeth is given by This shows that if the output gear G B has more teeth than the input gear G A , then the gear train amplifies the input torque. And, if the output gear has fewer teeth than the input gear, then the gear train reduces the input torque. If the output gear of a gear train rotates more slowly than the input gear, then the gear train is called a speed reducer (Force multiplier). In this case, because the output gear must have more teeth than the input gear, the speed reducer will amplify the input torque. Mechanisms consisting of two sprockets connected by a chain, or two pulleys connected by a belt are designed to provide a specific mechanical advantage in power transmission systems. The velocity v of the chain or belt is the same when in contact with the two sprockets or pulleys: where the input sprocket or pulley A meshes with the chain or belt along the pitch radius r A and the output sprocket or pulley B meshes with this chain or belt along the pitch radius r B , therefore where N A is the number of teeth on the input sprocket and N B is the number of teeth on the output sprocket. For a toothed belt drive, the number of teeth on the sprocket can be used. For friction belt drives the pitch radius of the input and output pulleys must be used. The mechanical advantage of a pair of a chain drive or toothed belt drive with an input sprocket with N A teeth and the output sprocket has N B teeth is given by The mechanical advantage for friction belt drives is given by Chains and belts dissipate power through friction, stretch and wear, which means the power output is actually less than the power input, which means the mechanical advantage of the real system will be less than that calculated for an ideal mechanism. A chain or belt drive can lose as much as 5% of the power through the system in friction heat, deformation and wear, in which case the efficiency of the drive is 95%. Consider the 18-speed bicycle with 7 in (radius) cranks and 26 in (diameter) wheels. 
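The torque and speed relations referred to but not written out in the passage above can be reconstructed from the stated definitions (constant power through an ideal mechanism, and a common pitch-line velocity for meshing gears, sprockets and belts). In standard notation this is a restatement, not a quotation of the original formulas:

\[ P = T_A\,\omega_A = T_B\,\omega_B \quad\Rightarrow\quad \mathrm{MA} = \frac{T_B}{T_A} = \frac{\omega_A}{\omega_B} \]
\[ v = r_A\,\omega_A = r_B\,\omega_B \quad\Rightarrow\quad \frac{\omega_A}{\omega_B} = \frac{r_B}{r_A} = \frac{N_B}{N_A} \]
\[ \mathrm{MA} = \frac{T_B}{T_A} = \frac{N_B}{N_A}\ \text{(gears, sprockets, toothed belts)}, \qquad \mathrm{MA} = \frac{r_B}{r_A}\ \text{(friction belts)} \]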
If the sprockets at the crank and at the rear drive wheel are the same size, then the ratio of the output force on the tire to the input force on the pedal can be calculated from the law of the lever to be Now, assume that the front sprockets have a choice of 28 and 52 teeth, and that the rear sprockets have a choice of 16 and 32 teeth. Using different combinations, we can compute the following speed ratios between the front and rear sprockets The ratio of the force driving the bicycle to the force on the pedal, which is the total mechanical advantage of the bicycle, is the product of the speed ratio (or teeth ratio of output sprocket/input sprocket) and the crank-wheel lever ratio. Notice that in every case the force on the pedals is greater than the force driving the bicycle forward (in the illustration above, the corresponding backward-directed reaction force on the ground is indicated). A block and tackle is an assembly of a rope and pulleys that is used to lift loads. A number of pulleys are assembled together to form the blocks, one that is fixed and one that moves with the load. The rope is threaded through the pulleys to provide mechanical advantage that amplifies that force applied to the rope. [ 4 ] In order to determine the mechanical advantage of a block and tackle system consider the simple case of a gun tackle, which has a single mounted, or fixed, pulley and a single movable pulley. The rope is threaded around the fixed block and falls down to the moving block where it is threaded around the pulley and brought back up to be knotted to the fixed block. Let S be the distance from the axle of the fixed block to the end of the rope, which is A where the input force is applied. Let R be the distance from the axle of the fixed block to the axle of the moving block, which is B where the load is applied. The total length of the rope L can be written as where K is the constant length of rope that passes over the pulleys and does not change as the block and tackle moves. The velocities V A and V B of the points A and B are related by the constant length of the rope, that is or The negative sign shows that the velocity of the load is opposite to the velocity of the applied force, which means as we pull down on the rope the load moves up. Let V A be positive downwards and V B be positive upwards, so this relationship can be written as the speed ratio where 2 is the number of rope sections supporting the moving block. Let F A be the input force applied at A the end of the rope, and let F B be the force at B on the moving block. Like the velocities F A is directed downwards and F B is directed upwards. For an ideal block and tackle system there is no friction in the pulleys and no deflection or wear in the rope, which means the power input by the applied force F A V A must equal the power out acting on the load F B V B , that is The ratio of the output force to the input force is the mechanical advantage of an ideal gun tackle system, This analysis generalizes to an ideal block and tackle with a moving block supported by n rope sections, This shows that the force exerted by an ideal block and tackle is n times the input force, where n is the number of sections of rope that support the moving block. Mechanical advantage that is computed using the assumption that no power is lost through deflection, friction and wear of a machine is the maximum performance that can be achieved. For this reason, it is often called the ideal mechanical advantage (IMA). 
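A short numeric sketch of the bicycle example discussed earlier in this section, using only the figures given in the text (7 in cranks, 26 in diameter wheels, front sprockets of 28 and 52 teeth, rear sprockets of 16 and 32 teeth); the code and variable names are illustrative.

```python
from itertools import product

crank_radius_in = 7.0        # crank length (in)
wheel_radius_in = 26.0 / 2   # 26 in diameter wheel

front_sprockets = [28, 52]   # teeth choices at the crank
rear_sprockets = [16, 32]    # teeth choices at the rear wheel

lever_ratio = crank_radius_in / wheel_radius_in  # crank-wheel lever ratio, ~0.54

for n_front, n_rear in product(front_sprockets, rear_sprockets):
    speed_ratio = n_rear / n_front        # output sprocket teeth / input sprocket teeth
    total_ma = speed_ratio * lever_ratio  # driving force / pedal force
    print(f"front {n_front:2d}, rear {n_rear:2d}: "
          f"speed ratio {speed_ratio:.2f}, total MA {total_ma:.2f}")
# Every total MA is below 1: the pedal force exceeds the force driving the bicycle forward.
```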
In operation, deflection, friction and wear will reduce the mechanical advantage. The amount of this reduction from the ideal to the actual mechanical advantage (AMA) is defined by a factor called efficiency , a quantity which is determined by experimentation. As an example, using a block and tackle with six rope sections and a 600 lb load, the operator of an ideal system would be required to pull the rope six feet and exert 100 lb F of force to lift the load one foot. Both the ratios F out / F in and V in / V out show that the IMA is six. For the first ratio, 100 lb F of force input results in 600 lb F of force out. In an actual system, the force out would be less than 600 pounds due to friction in the pulleys. The second ratio also yields a MA of 6 in the ideal case but a smaller value in the practical scenario; it does not properly account for energy losses such as rope stretch. Subtracting those losses from the IMA or using the first ratio yields the AMA. The ideal mechanical advantage (IMA), or theoretical mechanical advantage , is the mechanical advantage of a device with the assumption that its components do not flex, there is no friction, and there is no wear. It is calculated using the physical dimensions of the device and defines the maximum performance the device can achieve. The assumptions of an ideal machine are equivalent to the requirement that the machine does not store or dissipate energy; the power into the machine thus equals the power out. Therefore, the power P is constant through the machine and force times velocity into the machine equals the force times velocity out—that is, The ideal mechanical advantage is the ratio of the force out of the machine (load) to the force into the machine (effort), or Applying the constant power relationship yields a formula for this ideal mechanical advantage in terms of the speed ratio: The speed ratio of a machine can be calculated from its physical dimensions. The assumption of constant power thus allows use of the speed ratio to determine the maximum value for the mechanical advantage. The actual mechanical advantage (AMA) is the mechanical advantage determined by physical measurement of the input and output forces. Actual mechanical advantage takes into account energy loss due to deflection, friction, and wear. The AMA of a machine is calculated as the ratio of the measured force output to the measured force input, where the input and output forces are determined experimentally. The ratio of the experimentally determined mechanical advantage to the ideal mechanical advantage is the mechanical efficiency η of the machine,
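The formulas referred to in the passage above, including the efficiency expression at the end, can be reconstructed from the stated definitions. Written out in standard notation (a restatement, not a quotation of the original):

\[ P = F_{\mathrm{in}}\,v_{\mathrm{in}} = F_{\mathrm{out}}\,v_{\mathrm{out}} \]
\[ \mathrm{IMA} = \left.\frac{F_{\mathrm{out}}}{F_{\mathrm{in}}}\right|_{\mathrm{ideal}} = \frac{v_{\mathrm{in}}}{v_{\mathrm{out}}} \]
\[ \mathrm{AMA} = \left.\frac{F_{\mathrm{out}}}{F_{\mathrm{in}}}\right|_{\mathrm{measured}} \]
\[ \eta = \frac{\mathrm{AMA}}{\mathrm{IMA}} \]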
https://en.wikipedia.org/wiki/Mechanical_advantage
A simple machine that exhibits mechanical advantage is called a mechanical advantage device. For example, consider lifting a weight with rope and pulleys. A rope looped through a pulley attached to a fixed spot, e.g. a barn roof rafter, and attached to the weight is called a single pulley . It has a mechanical advantage (MA) of 1 (assuming frictionless bearings in the pulley): it provides no mechanical advantage (or disadvantage), however advantageous the change in direction may be. A single movable pulley has an MA of 2 (assuming frictionless bearings in the pulley). Consider a pulley attached to a weight being lifted. A rope passes around it, with one end attached to a fixed point above, e.g. a barn roof rafter, and a pulling force is applied upward to the other end with the two lengths parallel. In this situation the distance the lifter must pull the rope becomes twice the distance the weight travels, allowing the force applied to be halved. Note: if an additional pulley is used to change the direction of the rope, e.g. so that the person doing the work can stand on the ground instead of on a rafter, the mechanical advantage is not increased. By looping more ropes around more pulleys we can construct a block and tackle to continue to increase the mechanical advantage. For example, if we have two pulleys attached to the rafter, two pulleys attached to the weight, one end of the rope attached to the rafter, and someone standing on the rafter pulling the rope, we have a mechanical advantage of four. Again note: if we add another pulley so that someone may stand on the ground and pull down, we still have a mechanical advantage of four. There are also examples where the fixed point is not obvious. The theoretical mechanical advantage of a screw is the distance the applied effort moves in one turn (the circumference of the circle it traces) divided by the lead of the screw, i.e. the distance the screw advances in one turn. [ 2 ] Note that the actual mechanical advantage of a screw system is greater, as a screwdriver or other screw-driving system has a mechanical advantage as well.
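A minimal sketch of the screw calculation described above, assuming the effort is applied at a known radius from the screw axis (so the effort travels 2πr per turn); the function name and sample values are illustrative assumptions, not figures from the source.

```python
import math

def screw_ideal_ma(effort_radius_m, lead_m):
    """Ideal mechanical advantage of a screw: distance the effort moves in one
    turn (2*pi*r) divided by the lead (axial advance per turn)."""
    return (2 * math.pi * effort_radius_m) / lead_m

# Example: effort applied 10 cm from the screw axis, 2 mm lead.
print(round(screw_ideal_ma(0.10, 0.002)))  # ~314
```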
https://en.wikipedia.org/wiki/Mechanical_advantage_device
Mechanical alloying ( MA ) is a solid-state and powder processing technique involving repeated cold welding , fracturing, and re-welding of blended powder particles in a high-energy ball mill to produce a homogeneous material. Originally developed to produce oxide-dispersion strengthened (ODS) nickel- and iron-base superalloys for applications in the aerospace industry, [ 1 ] MA has now been shown to be capable of synthesizing a variety of equilibrium and non-equilibrium alloy phases starting from blended elemental or pre-alloyed powders. [ 2 ] The non-equilibrium phases synthesized include supersaturated solid solutions, metastable crystalline and quasicrystalline phases, nanostructures, and amorphous alloys. The method is sometimes classified as a surface severe plastic deformation method for producing nanomaterials. [ 3 ] Mechanical alloying is akin to metal powder processing, where metals may be mixed to produce superalloys . Mechanical alloying occurs in three steps. First, the alloy materials are combined in a ball mill and ground to a fine powder. A hot isostatic pressing (HIP) process is then applied to simultaneously compress and sinter the powder. A final heat treatment stage helps remove existing internal stresses produced during any cold compaction which may have been used. This produces an alloy suitable for high-temperature turbine blades and aerospace components. In combination with powder spheroidization, this technology can be used to rapidly develop new alloy powders for additive manufacturing. [ 5 ] Design parameters include the type of mill, milling container, milling speed, milling time, type, size, and size distribution of the grinding medium, ball-to-powder weight ratio, extent of filling the vial, milling atmosphere, process control agent, temperature of milling, and the reactivity of the species. The process of mechanical alloying produces composite powder particles. During high-energy milling the powder particles are repeatedly flattened, cold welded, fractured and rewelded. Whenever two steel balls collide, some powder is trapped between them. Typically, around 1000 particles with an aggregate weight of about 0.2 mg are trapped during each collision. The force of the impact plastically deforms the powder particles, leading to work hardening and fracture. The new surfaces thus created enable the particles to weld together, and this leads to an increase in particle size. Since the particles are soft in the early stages of milling (when using either a ductile-ductile or a ductile-brittle material combination), their tendency to weld together and form large particles is high. A broad range of particle sizes develops, with some particles up to three times as large as the starting particles. The composite particles at this stage have a characteristic layered structure consisting of various combinations of the starting constituents. With continued deformation the particles become work hardened and fracture by a fatigue failure mechanism and/or by the fragmentation of fragile flakes.
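Among the design parameters listed above is the ball-to-powder weight ratio (BPR). The following is a minimal illustrative sketch of sizing a mill charge for a target BPR; the function names and the example figures (powder mass, ball mass, 10:1 ratio) are assumptions for illustration, not values from the source.

```python
def ball_charge_mass(powder_mass_g, ball_to_powder_ratio):
    """Mass of grinding balls needed for a target ball-to-powder weight ratio (BPR)."""
    return powder_mass_g * ball_to_powder_ratio

def number_of_balls(ball_charge_g, single_ball_mass_g):
    """Round the charge to a whole number of identical balls."""
    return round(ball_charge_g / single_ball_mass_g)

# Example: 50 g of blended elemental powder at a 10:1 BPR, using 8 g hardened-steel balls.
charge = ball_charge_mass(50.0, 10.0)        # 500 g of balls
print(charge, number_of_balls(charge, 8.0))  # 500.0 g, about 63 balls
```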
https://en.wikipedia.org/wiki/Mechanical_alloying
A mechanical biological treatment ( MBT ) system is a type of waste processing facility that combines a sorting facility with a form of biological treatment such as composting or anaerobic digestion . MBT plants are designed to process mixed household waste as well as commercial and industrial wastes . The terms mechanical biological treatment or mechanical biological pre-treatment relate to a group of solid waste treatment systems . These systems enable the recovery of materials contained within the mixed waste and facilitate the stabilisation of the biodegradable component of the material. [ 1 ] [ 2 ] Twenty two facilities in the UK have implemented MBT/BMT treatment processes. [ 3 ] The sorting component of the plants typically resemble a materials recovery facility . This component is either configured to recover the individual elements of the waste or produce a refuse-derived fuel that can be used for the generation of power. The components of the mixed waste stream that can be recovered include: MBT is also sometimes termed biological mechanical treatment (BMT), however this simply refers to the order of processing (i.e., the biological phase of the system precedes the mechanical sorting). MBT should not be confused with mechanical heat treatment (MHT). The "mechanical" element is usually an automated mechanical sorting stage. This either removes recyclable elements from a mixed waste stream (such as metals, plastics, glass, and paper) or processes them. It typically involves factory style conveyors , industrial magnets , eddy current separators , trommels , shredders , and other tailor made systems, or the sorting is done manually at hand picking stations. The mechanical element has a number of similarities to a materials recovery facility (MRF). [ 4 ] Some systems integrate a wet MRF to separate by density and flotation and to recover and wash the recyclable elements of the waste in a form that can be sent for recycling. MBT can alternatively process the waste to produce a high calorific fuel termed refuse derived fuel (RDF). RDF can be used in cement kilns or thermal combustion power plants and is generally made up from plastics and biodegradable organic waste. Systems which are configured to produce RDF include the Herhof and Ecodeco processes. It is a common misconception that all MBT processes produce RDF; this is not the case, and depends strictly on system configuration and suitable local markets for MBT outputs. The "biological" element refers to either: Anaerobic digestion harnesses anaerobic microorganisms to break down the biodegradable component of the waste to produce biogas and soil improver . The biogas can be used to generate electricity and heat . Biological can also refer to a composting stage. Here the organic component is broken down by naturally occurring aerobic microorganisms. They break down the waste into carbon dioxide and compost. There is no green energy produced by systems employing only composting treatment for the biodegradable waste. In the case of biodrying, the waste material undergoes a period of rapid heating through the action of aerobic microbes. During this partial composting stage the heat generated by the microbes result in rapid drying of the waste. These systems are often configured to produce a refuse-derived fuel where a dry, light material is advantageous for later transport and combustion. Some systems incorporate both anaerobic digestion and composting. 
This may either take the form of a full anaerobic digestion phase, followed by the maturation (composting) of the digestate. Alternatively a partial anaerobic digestion phase can be induced on water that is percolated through the raw waste, dissolving the readily available sugars, with the remaining material being sent to a windrow composting facility. By processing the biodegradable waste either by anaerobic digestion or by composting MBT technologies help to reduce the contribution of greenhouse gases to global warming . Usable wastes for this system: Possible products of this system: Further advantages: MBT systems can form an integral part of a region's waste treatment infrastructure. These systems are typically integrated with kerbside collection schemes. In the event that a refuse-derived fuel is produced as a by-product then a combustion facility would be required. This could either be an incineration facility or a gasifier. Alternatively MBT solutions can diminish the need for home separation and kerbside collection of recyclable elements of waste. This gives the ability of local authorities, municipalities and councils to reduce the use of waste vehicles on the roads and keep recycling rates high. Friends of the Earth suggests that the best environmental route for residual waste is to firstly maximise removal of remaining recyclable materials from the waste stream (such as metals, plastics and paper). The amount of waste remaining should be composted or anaerobically digested and disposed of to landfill , unless sufficiently clean to be used as compost. A report by Eunomia [ 5 ] undertook a detailed analysis of the climate impacts of different residual waste technologies. It found that an MBT process that extracts both the metals and plastics prior to landfilling is one of the best options for dealing with our residual waste, and has a lower impact than either MBT processes producing RDF for incineration or incineration of waste without MBT. Friends of the Earth does not support MBT plants that produce refuse derived fuel (RDF), and believes MBT processes should occur in small, localised treatment plants. [ 6 ]
https://en.wikipedia.org/wiki/Mechanical_biological_treatment
Mechanical brake stretch wrappers are manual stretch wrapping systems that consist of a simple structure supporting a roll of film and a mechanical brake acting on the film roll, which creates resistance to turning and thereby stretches the film as it is fed out. The amount of braking is often adjustable by means of either a knob or a lever system. The support structure is most commonly a base paired with a handle, but pole wrappers have recently been introduced, providing a more ergonomic design.
https://en.wikipedia.org/wiki/Mechanical_brake_stretch_wrapper
A mechanical calculator , or calculating machine , is a mechanical device used to perform the basic operations of arithmetic automatically, or a simulation like an analog computer or a slide rule . Most mechanical calculators were comparable in size to small desktop computers and have been rendered obsolete by the advent of the electronic calculator and the digital computer . Surviving notes from Wilhelm Schickard in 1623 reveal that he designed and had built the earliest known apparatus fulfilling the widely accepted definition of a mechanical calculator (a counting machine with an automated tens-carry). His machine was composed of two sets of technologies: first an abacus made of Napier's bones , to simplify multiplications and divisions first described six years earlier in 1617, and for the mechanical part, it had a dialed pedometer to perform additions and subtractions. A study of the surviving notes shows a machine that could have jammed after a few entries on the same dial. [ 1 ] argued that it could be damaged if a carry had to be propagated over a few digits (e.g. adding 1 to 999), but further study and working replicas refute this claim. Schickard tried to build a second machine for the astronomer Johannes Kepler , but could not complete it. During the turmoil of the 30-year-war his machine was burned, Schickard died of the plague in 1635. Two decades after Schickard, in 1642, Blaise Pascal invented another mechanical calculator with better tens-carry. [ 2 ] Co-opted into his father's labour as tax collector in Rouen, Pascal designed the Pascaline to help with the large amount of tedious arithmetic required. [ 3 ] In 1672, Gottfried Leibniz started designing an entirely new machine called the Stepped Reckoner . It used a stepped drum, built by and named after him, the Leibniz wheel , was the first two-motion design, the first to use cursors (creating a memory of the first operand) and the first to have a movable carriage. Leibniz built two Stepped Reckoners, one in 1694 and one in 1706. [ 4 ] The Leibniz wheel was used in many calculating machines for 200 years, and into the 1970s with the Curta hand calculator, until the advent of the electronic calculator in the mid-1970s. Leibniz was also the first to promote the idea of a pinwheel calculator . [ 5 ] During the 18th century, several inventors in Europe were working on mechanical calculators for all four species. Philipp Matthäus Hahn, Johann Helfreich Müller and others constructed machines that were working flawless, but due to the enormous amount of manual work and high precision needed for these machines they remained singletons and stayed mostly in cabinets of couriosity of their respective rulers. Only Müller's 1783 machine was put to use tabulating lumber prices; it later came into possession of the landgrave in Darmstadt. Thomas' arithmometer , the first commercially successful machine, was manufactured in 1851; it was the first mechanical calculator strong enough and reliable enough to be used daily in an office environment. For forty years the arithmometer was the only type of mechanical calculator available for sale until the industrial production of the more successful Odhner Arithmometer in 1890. [ 6 ] The comptometer , introduced in 1887, was the first machine to use a keyboard that consisted of columns of nine keys (from 1 to 9) for each digit. The Dalton adding machine, manufactured in 1902, was the first to have a 10 key keyboard. [ 7 ] Electric motors were used on some mechanical calculators from 1901. 
[ 8 ] In 1961, a comptometer type machine, the Anita Mk VII from Sumlock, became the first desktop mechanical calculator to receive an all-electronic calculator engine, creating the link in between these two industries and marking the beginning of its decline. The production of mechanical calculators came to a stop in the middle of the 1970s closing an industry that had lasted for 120 years. Charles Babbage designed two kinds of mechanical calculators, which were too sophisticated to be built in his lifetime, and the dimensions of which required a steam engine to power them. The first was an automatic mechanical calculator, his difference engine , which could automatically compute and print mathematical tables. In 1855, Georg Scheutz became the first of a handful of designers to succeed at building a smaller and simpler model of his difference engine. [ 9 ] The second one was a programmable mechanical calculator, his analytical engine , which Babbage started to design in 1834; "in less than two years he had sketched out many of the salient features of the modern computer . A crucial step was the adoption of a punched card system derived from the Jacquard loom " [ 10 ] making it infinitely programmable. [ 11 ] In 1937, Howard Aiken convinced IBM to design and build the ASCC/Mark I , the first machine of its kind, based on the architecture of the analytical engine; [ 12 ] when the machine was finished some hailed it as "Babbage's dream come true". [ 13 ] The desire to economize time and mental effort in arithmetical computations, and to eliminate human liability to error , is probably as old as the science of arithmetic itself. This desire has led to the design and construction of a variety of aids to calculation, beginning with groups of small objects, such as pebbles, first used loosely, later as counters on ruled boards, and later still as beads mounted on wires fixed in a frame, as in the abacus. This instrument was probably invented by the Semitic races [ a ] and later adopted in India, whence it spread westward throughout Europe and eastward to China and Japan. After the development of the abacus, no further advances were made until John Napier devised his numbering rods, or Napier's Bones , in 1617. Various forms of the Bones appeared, some approaching the beginning of mechanical computation, but it was not until 1642 that Blaise Pascal gave us the first mechanical calculating machine in the sense that the term is used today. A short list of other precursors to the mechanical calculator must include a group of mechanical analog computers which, once set, are only modified by the continuous and repeated action of their actuators (crank handle, weight, wheel, water...). Before the common era , there are odometers and the Antikythera mechanism , a seemingly out of place , unique, geared astronomical clock , followed more than a millennium later by early mechanical clocks , geared astrolabes and followed in the 15th century by pedometers . These machines were all made of toothed gears linked by some sort of carry mechanisms. These machines always produce identical results for identical initial settings unlike a mechanical calculator where all the wheels are independent but are also linked together by the rules of arithmetic. The 17th century marked the beginning of the history of mechanical calculators, as it saw the invention of its first machines, including Pascal's calculator , in 1642. 
[ 3 ] [ 14 ] Blaise Pascal had invented a machine which he presented as being able to perform computations that were previously thought to be only humanly possible. [ 15 ] In a sense, Pascal's invention was premature, in that the mechanical arts in his time were not sufficiently advanced to enable his machine to be made at an economic price, with the accuracy and strength needed for reasonably long use. This difficulty was not overcome until well on into the nineteenth century, by which time also a renewed stimulus to invention was given by the need for many kinds of calculation more intricate than those considered by Pascal. The 17th century also saw the invention of some very powerful tools to aid arithmetic calculations like Napier's bones , logarithmic tables and the slide rule which, for their ease of use by scientists in multiplying and dividing, ruled over and impeded the use and development of mechanical calculators [ 17 ] until the production release of the arithmometer in the mid 19th century. In 1623 and 1624 Wilhelm Schickard , in two letters that he sent to Johannes Kepler , reported his design and construction of what he referred to as an “arithmeticum organum” (“arithmetical instrument”), which would later be described as a Rechenuhr (calculating clock). The machine was designed to assist in all the four basic functions of arithmetic (addition, subtraction, multiplication and division). Amongst its uses, Schickard suggested it would help in the laborious task of calculating astronomical tables. The machine could add and subtract six-digit numbers, and indicated an overflow of this capacity by ringing a bell. The adding machine in the base was primarily provided to assist in the difficult task of adding or multiplying two multi-digit numbers. To this end an ingenious arrangement of rotatable Napier's bones were mounted on it. It even had an additional "memory register" to record intermediate calculations. Whilst Schickard noted that the adding machine was working, his letters mention that he had asked a professional, a clockmaker named Johann Pfister, to build a finished machine. Regrettably it was destroyed in a fire either whilst still incomplete, or in any case before delivery. Schickard abandoned his project soon after. He and his entire family were wiped out in 1635 by bubonic plague during the Thirty Years' War. Schickard's machine used clock wheels which were made stronger and were therefore heavier, to prevent them from being damaged by the force of an operator input. Each digit used a display wheel, an input wheel and an intermediate wheel. During a carry transfer all these wheels meshed with the wheels of the digit receiving the carry. Blaise Pascal invented a mechanical calculator with a sophisticated carry mechanism in 1642. After three years of effort and 50 prototypes [ 19 ] he introduced his calculator to the public. He built twenty of these machines in the following ten years. [ 20 ] This machine could add and subtract two numbers directly and multiply and divide by repetition. Since, unlike Schickard's machine, the Pascaline dials could only rotate in one direction zeroing it after each calculation required the operator to dial in all 9s and then ( method of re-zeroing ) propagate a carry right through the machine. [ 21 ] This suggests that the carry mechanism would have proved itself in practice many times over. 
This is a testament to the quality of the Pascaline because none of the 17th and 18th century criticisms of the machine mentioned a problem with the carry mechanism and yet it was fully tested on all the machines, by their resets, all the time. [ 22 ] Pascal's invention of the calculating machine, just three hundred years ago, was made while he was a youth of nineteen. He was spurred to it by seeing the burden of arithmetical labour involved in his father's official work as supervisor of taxes at Rouen. He conceived the idea of doing the work mechanically, and developed a design appropriate for this purpose; showing herein the same combination of pure science and mechanical genius that characterized his whole life. But it was one thing to conceive and design the machine, and another to get it made and put into use. Here were needed those practical gifts that he displayed later in his inventions... In 1672, Gottfried Leibniz started working on adding direct multiplication to what he understood was the working of Pascal's calculator. However, it is doubtful that he had ever fully seen the mechanism and the method could not have worked because of the lack of reversible rotation in the mechanism. Accordingly, he eventually designed an entirely new machine called the Stepped Reckoner ; it used his Leibniz wheels , was the first two-motion calculator, the first to use cursors (creating a memory of the first operand) and the first to have a movable carriage. Leibniz built two Stepped Reckoners, one in 1694 and one in 1706. [ 4 ] Only the machine built in 1694 is known to exist; it was rediscovered at the end of the 19th century having been forgotten in an attic in the University of Göttingen . [ 4 ] In 1893, the German calculating machine inventor Arthur Burkhardt was asked to put Leibniz's machine in operating condition if possible. His report was favorable except for the sequence in the carry. [ 23 ] Leibniz had invented his namesake wheel and the principle of a two-motion calculator, but after forty years of development he wasn't able to produce a machine that was fully operational; [ 24 ] this makes Pascal's calculator the only working mechanical calculator in the 17th century. Leibniz was also the first person to describe a pinwheel calculator . [ 25 ] He once said "It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used." [ 26 ] Schickard, Pascal and Leibniz were inevitably inspired by the role of clockwork which was highly celebrated in the seventeenth century. [ 27 ] However, simple-minded application of interlinked gears was insufficient for any of their purposes. Schickard introduced the use of a single toothed "mutilated gear" to enable the carry to take place. Pascal improved on that with his famous weighted sautoir. Leibniz went even further in relation to the ability to use a moveable carriage to perform multiplication more efficiently, albeit at the expense of a fully working carry mechanism. ...I devised a third which works by springs and which has a very simple design. This is the one, as I have already stated, that I used many times, hidden in the plain sight of an infinity of persons and which is still in operating order. Nevertheless, while always improving on it, I found reasons to change its design... 
When, several years ago, I saw for the first time an instrument which, when carried, automatically records the numbers of steps by a pedestrian, it occurred to me at once that the entire arithmetic could be subjected to a similar kind of machinery so that not only counting but also addition and subtraction, multiplication and division could be accomplished by a suitably arranged machine easily, promptly, and with sure results. The principle of the clock (input wheels and display wheels added to a clock-like mechanism) for a direct-entry calculating machine couldn't be implemented to create a fully effective calculating machine without additional innovation with the technological capabilities of the 17th century, [ 30 ] because their gears would jam when a carry had to be moved several places along the accumulator. The only 17th-century calculating clocks that have survived to this day do not have a machine-wide carry mechanism and therefore cannot be called fully effective mechanical calculators. A much more successful calculating clock was built by the Italian Giovanni Poleni in the 18th century and was a two-motion calculating clock (the numbers are inscribed first and then they are processed). The 18th century saw the first mechanical calculator that could perform a multiplication automatically; designed and built by Giovanni Poleni in 1709 and made of wood, it was the first successful calculating clock. For all the machines built in this century, division still required the operator to decide when to stop a repeated subtraction at each index, and therefore these machines were only providing a help in dividing, like an abacus . Both pinwheel calculators and Leibniz wheel calculators were built with a few unsuccessful attempts at their commercialization. Luigi Torchi invented the first direct multiplication machine in 1834. [ 54 ] This was also the second key-driven machine in the world, following that of James White (1822). [ 55 ] The mechanical calculator industry started in 1851 Thomas de Colmar released his simplified Arithmomètre , which was the first machine that could be used daily in an office environment. For 40 years, [ 56 ] the arithmometer was the only mechanical calculator available for sale and was sold all over the world. By 1890, about 2,500 arithmometers had been sold [ 57 ] plus a few hundreds more from two licensed arithmometer clone makers (Burkhardt, Germany, 1878 and Layton, UK, 1883). Felt and Tarrant, the only other competitor in true commercial production, had sold 100 comptometers in three years. [ 58 ] The 19th century also saw the designs of Charles Babbage calculating machines, first with his difference engine , started in 1822, which was the first automatic calculator since it continuously used the results of the previous operation for the next one, and second with his analytical engine , which was the first programmable calculator, using Jacquard's cards to read program and data, that he started in 1834, and which gave the blueprint of the mainframe computers built in the middle of the 20th century. [ 59 ] The cash register, invented by the American saloonkeeper James Ritty in 1879, addressed the old problems of disorganization and dishonesty in business transactions. [ 72 ] It was a pure adding machine coupled with a printer , a bell and a two-sided display that showed the paying party and the store owner, if he wanted to, the amount of money exchanged for the current transaction. 
The cash register was easy to use and, unlike genuine mechanical calculators, was needed and quickly adopted by a great number of businesses. "Eighty four companies sold cash registers between 1888 and 1895, only three survived for any length of time". [ 73 ] In 1890, 6 years after John Patterson started NCR Corporation , 20,000 machines had been sold by his company alone against a total of roughly 3,500 for all genuine calculators combined. [ 74 ] By 1900, NCR had built 200,000 cash registers [ 75 ] and there were more companies manufacturing them, compared to the "Thomas/Payen" arithmometer company that had just sold around 3,300 [ 76 ] and Burroughs had only sold 1,400 machines. [ 77 ] Two different classes of mechanisms had become established by this time, reciprocating and rotary. The former type of mechanism was operated typically by a limited-travel hand crank; some internal detailed operations took place on the pull, and others on the release part of a complete cycle. The illustrated 1914 machine is this type; the crank is vertical, on its right side. Later on, some of these mechanisms were operated by electric motors and reduction gearing that operated a crank and connecting rod to convert rotary motion to reciprocating. The latter type, rotary, had at least one main shaft that made one [or more] continuous revolution[s], one addition or subtraction per turn. Numerous designs, notably European calculators, had handcranks, and locks to ensure that the cranks were returned to exact positions once a turn was complete. The first half of the 20th century saw the gradual development of the mechanical calculator mechanism. The Dalton adding-listing machine introduced in 1902 was the first of its type to use only ten keys, and became the first of many different models of "10-key add-listers" manufactured by many companies. In 1948 the cylindrical Curta calculator, which was compact enough to be held in one hand, was introduced after being developed by Curt Herzstark in 1938. This was an extreme development of the stepped-gear calculating mechanism. It subtracted by adding complements; between the teeth for addition were teeth for subtraction. From the early 1900s through the 1960s, mechanical calculators dominated the desktop computing market. Major suppliers in the USA included Friden , Monroe , and SCM/Marchant . These devices were motor-driven, and had movable carriages where results of calculations were displayed by dials. Nearly all keyboards were full – each digit that could be entered had its own column of nine keys, 1..9, plus a column-clear key, permitting entry of several digits at once. (See the illustration below of a Marchant Figurematic.) One could call this parallel entry, by way of contrast with ten-key serial entry that was commonplace in mechanical adding machines, and is now universal in electronic calculators. (Nearly all Friden calculators, as well as some rotary (German) Diehls had a ten-key auxiliary keyboard for entering the multiplier when doing multiplication.) Full keyboards generally had ten columns, although some lower-cost machines had eight. Most machines made by the three companies mentioned did not print their results, although other companies, such as Olivetti , did make printing calculators. In these machines, addition and subtraction were performed in a single operation, as on a conventional adding machine, but multiplication and division were accomplished by repeated mechanical additions and subtractions. 
Friden made a calculator that also provided square roots , basically by doing division, but with added mechanism that automatically incremented the number in the keyboard in a systematic fashion. The last of the mechanical calculators were likely to have short-cut multiplication, and some ten-key, serial-entry types had decimal-point keys. However, decimal-point keys required significant internal added complexity, and were offered only in the last designs to be made. Handheld mechanical calculators such as the 1948 Curta continued to be used until they were displaced by electronic calculators in the 1970s. Typical European four-operation machines use the Odhner mechanism, or variations of it. This kind of machine included the Original Odhner , Brunsviga and several following imitators, starting from Triumphator, Thales, Walther, Facit up to Toshiba. Although most of these were operated by handcranks, there were motor-driven versions. Hamann calculators externally resembled pinwheel machines, but the setting lever positioned a cam that disengaged a drive pawl when the dial had moved far enough. Although Dalton introduced in 1902 first 10-key printing adding (two operations, the other being subtraction) machine, these features were not present in computing (four operations) machines for many decades. Facit-T (1932) was the first 10-key computing machine sold in large numbers. Olivetti Divisumma-14 (1948) was the first computing machine with both printer and a 10-key keyboard. Full-keyboard machines, including motor-driven ones, were also built until the 1960s. Among the major manufacturers were Mercedes-Euklid, Archimedes, and MADAS in Europe; in the USA, Friden, Marchant, and Monroe were the principal makers of rotary calculators with carriages. Reciprocating calculators (most of which were adding machines, many with integral printers) were made by Remington Rand and Burroughs, among others. All of these were key-set. Felt & Tarrant made Comptometers, as well as Victor, which were key-driven. The basic mechanism of the Friden and Monroe was a modified Leibniz wheel (better known, perhaps informally, in the USA as a "stepped drum" or "stepped reckoner"). The Friden had an elementary reversing drive between the body of the machine and the accumulator dials, so its main shaft always rotated in the same direction. The Swiss MADAS was similar. The Monroe, however, reversed direction of its main shaft to subtract. The earliest Marchants were pinwheel machines, but most of them were remarkably sophisticated rotary types. They ran at 1,300 addition cycles per minute if the [+] bar is held down. Others were limited to 600 cycles per minute, because their accumulator dials started and stopped for every cycle; Marchant dials moved at a steady and proportional speed for continuing cycles. Most Marchants had a row of nine keys on the extreme right, as shown in the photo of the Figurematic. These simply made the machine add for the number of cycles corresponding to the number on the key, and then shifted the carriage one place. Even nine add cycles took only a short time. In a Marchant, near the beginning of a cycle, the accumulator dials moved downward "into the dip", away from the openings in the cover. They engaged drive gears in the body of the machine, which rotated them at speeds proportional to the digit being fed to them, with added movement (reduced 10:1) from carries created by dials to their right. 
At the completion of the cycle, the dials would be misaligned like the pointers in a traditional watt-hour meter. However, as they came up out of the dip, a constant-lead disc cam realigned them by way of a (limited-travel) spur-gear differential. As well, carries for lower orders were added in by another, planetary differential. (The machine shown has 39 differentials in its [20-digit] accumulator!) In any mechanical calculator, in effect, a gear, sector, or some similar device moves the accumulator by the number of gear teeth that corresponds to the digit being added or subtracted – three teeth changes the position by a count of three. The great majority of basic calculator mechanisms move the accumulator by starting, then moving at a constant speed, and stopping. In particular, stopping is critical, because to obtain fast operation, the accumulator needs to move quickly. Variants of Geneva drives typically block overshoot (which, of course, would create wrong results). However, two different basic mechanisms, the Mercedes-Euklid and the Marchant, move the dials at speeds corresponding to the digit being added or subtracted; a [1] moves the accumulator the slowest, and a [9], the fastest. In the Mercedes-Euklid, a long slotted lever, pivoted at one end, moves nine racks ("straight gears") endwise by distances proportional to their distance from the lever's pivot. Each rack has a drive pin that is moved by the slot. The rack for [1] is closest to the pivot, of course. For each keyboard digit, a sliding selector gear, much like that in the Leibniz wheel, engages the rack that corresponds to the digit entered. Of course, the accumulator changes either on the forward or reverse stroke, but not both. This mechanism is notably simple and relatively easy to manufacture. The Marchant, however, has, for every one of its ten columns of keys, a nine-ratio "preselector transmission" with its output spur gear at the top of the machine's body; that gear engages the accumulator gearing. When one tries to work out the numbers of teeth in such a transmission, a straightforward approach leads one to consider a mechanism like that in mechanical gasoline pump registers, used to indicate the total price. However, this mechanism is seriously bulky, and utterly impractical for a calculator; 90-tooth gears are likely to be found in the gas pump. Practical gears in the computing parts of a calculator cannot have 90 teeth. They would be either too big, or too delicate. Given that nine ratios per column implies significant complexity, a Marchant contains a few hundred individual gears in all, many in its accumulator. Basically, the accumulator dial has to rotate 36 degrees (1/10 of a turn) for a [1], and 324 degrees (9/10 of a turn) for a [9], not allowing for incoming carries. At some point in the gearing, one tooth needs to pass for a [1], and nine teeth for a [9]. There is no way to develop the needed movement from a driveshaft that rotates one revolution per cycle with few gears having practical (relatively small) numbers of teeth. The Marchant, therefore, has three driveshafts to feed the little transmissions. For one cycle, they rotate 1/2, 1/4, and 1/12 of a revolution. [1] . The 1/2-turn shaft carries (for each column) gears with 12, 14, 16, and 18 teeth, corresponding to digits 6, 7, 8, and 9. The 1/4-turn shaft carries (also, each column) gears with 12, 16, and 20 teeth, for 3, 4, and 5. Digits [1] and [2] are handled by 12 and 24-tooth gears on the 1/12-revolution shaft. 
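A small arithmetic check of the Marchant gearing just described: for each driveshaft, the fraction of a revolution per cycle multiplied by the tooth count of the selected gear equals the digit value, i.e. exactly one tooth passes per count. This is an illustrative sketch based only on the figures quoted in the text; the variable names are not from the source.

```python
from fractions import Fraction

# Driveshaft turn per cycle -> gear tooth counts available on that shaft,
# as described for the Marchant preselector transmissions.
shafts = {
    Fraction(1, 2): [12, 14, 16, 18],   # digits 6, 7, 8, 9
    Fraction(1, 4): [12, 16, 20],       # digits 3, 4, 5
    Fraction(1, 12): [12, 24],          # digits 1, 2
}

for turn, teeth_list in shafts.items():
    for teeth in teeth_list:
        digit = turn * teeth  # teeth passing per cycle = digit value
        print(f"{turn} turn x {teeth} teeth -> digit {digit}")
```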
Practical design places the 12th-rev. shaft more distant, so the 1/4-turn shaft carries freely-rotating 24 and 12-tooth idler gears. For subtraction, the driveshafts reversed direction. In the early part of the cycle, one of five pendants moves off-center to engage the appropriate drive gear for the selected digit. Some machines had as many as 20 columns in their full keyboards. The monster in this field was the Duodecillion made by Burroughs for exhibit purposes. For sterling currency, £/s/d (and even farthings), there were variations of the basic mechanisms, in particular with different numbers of gear teeth and accumulator dial positions. To accommodate shillings and pence, extra columns were added for the tens digit[s], 10 and 20 for shillings, and 10 for pence. Of course, these functioned as radix-20 and radix-12 mechanisms. A variant of the Marchant, called the Binary-Octal Marchant, was a radix-8 (octal) machine. It was sold to check very early vacuum-tube (valve) binary computers for accuracy. (Back then, the mechanical calculator was much more reliable than a tube/valve computer.) As well, there was a twin Marchant, comprising two pinwheel Marchants with a common drive crank and reversing gearbox. [ 84 ] Twin machines were relatively rare, and apparently were used for surveying calculations. At least one triple machine was made. The Facit calculator, and one similar to it, are basically pinwheel machines, but the array of pinwheels moves sidewise, instead of the carriage. The pinwheels are biquinary; digits 1 through 4 cause the corresponding number of sliding pins to extend from the surface; digits 5 through 9 also extend a five-tooth sector as well as the same pins for 6 through 9. The keys operate cams that operate a swinging lever to first unlock the pin-positioning cam that is part of the pinwheel mechanism; further movement of the lever (by an amount determined by the key's cam) rotates the pin-positioning cam to extend the necessary number of pins. [ 85 ] Stylus-operated adders with circular slots for the stylus, and side-by -side wheels, as made by Sterling Plastics (USA), had an ingenious anti-overshoot mechanism to ensure accurate carries. Mechanical calculators continued to be sold, though in rapidly decreasing numbers, into the early 1970s, with many of the manufacturers closing down or being taken over. Comptometer type calculators were often retained for much longer to be used for adding and listing duties, especially in accounting, since a trained and skilled operator could enter all the digits of a number in one movement of the hands on a comptometer quicker than was possible serially with a 10-key electronic calculator. In fact, it was quicker to enter larger digits in two strokes using only the lower-numbered keys; for instance, a 9 would be entered as 4 followed by 5. Some key-driven calculators had keys for every column, but only 1 through 5; they were correspondingly compact. The spread of the computer rather than the simple electronic calculator put an end to the comptometer. Also, by the end of the 1970s, the slide rule had become obsolete.
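The sterling-currency variants mentioned above carried at radix 12 for pence and radix 20 for shillings. As an illustrative sketch (not a description of any particular machine), the same mixed-radix carry can be expressed as follows; the function name and example amount are illustrative.

```python
def pence_to_lsd(total_pence):
    """Split a pence total into pounds/shillings/pence:
    the pence column carries at 12, the shillings column at 20."""
    shillings, pence = divmod(total_pence, 12)   # radix-12 column
    pounds, shillings = divmod(shillings, 20)    # radix-20 column
    return pounds, shillings, pence

print(pence_to_lsd(2749))  # (11, 9, 1) -> £11 9s 1d
```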
https://en.wikipedia.org/wiki/Mechanical_calculator
Mechanical counters are counters built using mechanical components. They typically consist of a series of disks mounted on an axle, with the digits zero through nine marked on their edges. The rightmost disk moves one increment with each event. Each disk except the leftmost has a protrusion that, after the completion of one revolution, moves the next disk to the left one increment. Such counters have been used as odometers for bicycles and cars, in tape recorders and fuel dispensers, and to control manufacturing processes. One of the largest manufacturers was the Veeder-Root company, and their name was often used for this type of counter. [ 1 ] Mechanical counters can be made into electromechanical counters , which count electrical impulses, by adding a small solenoid . An odometer for measuring distance was first described by Vitruvius between 27 and 23 BC, although the actual inventor may have been Archimedes of Syracuse (c. 287 BC – c. 212 BC). It was based on chariot wheels turning 400 times in one Roman mile . For each revolution a pin on the axle engaged a 400-tooth cogwheel, turning it one complete revolution per mile. This engaged another gear with holes along its circumference, where pebbles ( calculus ) were located and dropped one by one into a box. The distance traveled would thus be given simply by counting the number of pebbles. [ 2 ] The odometer was also independently invented in ancient China , possibly by the prolific inventor and early scientist Zhang Heng (78 AD – 139 AD) of the Han Dynasty (202 BC–220 AD). By the 3rd century (during the Three Kingdoms Period), the Chinese had termed the device the 'jì lĭ gŭ chē' (記里鼓車), or ' li -recording drum carriage'. [ 3 ] Chinese texts of the 3rd century tell of the mechanical carriage's functions: as one li is traversed, a mechanically driven wooden figure strikes a drum, and when ten li have been traversed, another wooden figure strikes a gong or a bell with its mechanically operated arm. [ 3 ]
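A minimal sketch of the cascaded-disk behaviour described above: the rightmost disk advances on every event, and a disk that completes a revolution advances the disk to its left by one increment. The class and variable names are illustrative.

```python
class MechanicalCounter:
    """Simulates a row of 0-9 disks, each with a carry protrusion."""

    def __init__(self, digits=5):
        self.disks = [0] * digits  # leftmost disk first, rightmost last

    def click(self):
        """One event: advance the rightmost disk, propagating carries leftwards."""
        i = len(self.disks) - 1
        while i >= 0:
            self.disks[i] = (self.disks[i] + 1) % 10
            if self.disks[i] != 0:  # no rollover, so no carry to the next disk
                break
            i -= 1                  # rollover: the protrusion advances the disk to the left

    def reading(self):
        return "".join(str(d) for d in self.disks)

counter = MechanicalCounter(digits=4)
for _ in range(1203):
    counter.click()
print(counter.reading())  # '1203'
```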
https://en.wikipedia.org/wiki/Mechanical_counter
Mechanical engineering is the study of physical machines and mechanisms that may involve force and movement. It is an engineering branch that combines engineering physics and mathematics principles with materials science to design , analyze, manufacture, and maintain mechanical systems . [ 1 ] It is one of the oldest and broadest of the engineering branches . Mechanical engineering requires an understanding of core areas including mechanics , dynamics , thermodynamics , materials science , design , structural analysis , and electricity . In addition to these core principles, mechanical engineers use tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided engineering (CAE), and product lifecycle management to design and analyze manufacturing plants , industrial equipment and machinery , heating and cooling systems , transport systems, motor vehicles , aircraft , watercraft , robotics , medical devices , weapons , and others. [ 2 ] [ 3 ] Mechanical engineering emerged as a field during the Industrial Revolution in Europe in the 18th century; however, its development can be traced back several thousand years around the world. In the 19th century, developments in physics led to the development of mechanical engineering science. The field has continually evolved to incorporate advancements; today mechanical engineers are pursuing developments in such areas as composites , mechatronics , and nanotechnology . It also overlaps with aerospace engineering , metallurgical engineering , civil engineering , structural engineering , electrical engineering , manufacturing engineering , chemical engineering , industrial engineering , and other engineering disciplines to varying degrees. Mechanical engineers may also work in the field of biomedical engineering , specifically with biomechanics , transport phenomena , biomechatronics , bionanotechnology , and modelling of biological systems. The application of mechanical engineering can be seen in the archives of various ancient and medieval societies. The six classic simple machines were known in the ancient Near East . The wedge and the inclined plane (ramp) have been known since prehistoric times. [ 4 ] Mesopotamian civilization is credited with the invention of the wheel by several, mainly older, sources. [ 5 ] [ 6 ] [ 7 ] However, some recent sources either suggest that it was invented independently in both Mesopotamia and Eastern Europe or credit prehistoric Eastern Europeans with the invention of the wheel. [ 8 ] [ 9 ] [ 10 ] [ 11 ] The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale , [ 12 ] and to move large objects in ancient Egyptian technology . [ 13 ] The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC. [ 12 ] The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC. [ 14 ] The Saqiyah was developed in the Kingdom of Kush during the 4th century BC. It relied on animal power, reducing the demand on human energy. [ 15 ] Reservoirs in the form of Hafirs were developed in Kush to store water and boost irrigation. [ 16 ] Bloomeries and blast furnaces were developed during the seventh century BC in Meroe . [ 17 ] [ 18 ] [ 19 ] [ 20 ] Kushite sundials applied mathematics in the form of advanced trigonometry. 
[ 21 ] [ 22 ] The earliest practical water-powered machines, the water wheel and watermill , first appeared in the Persian Empire , in what are now Iraq and Iran, by the early 4th century BC. [ 23 ] In ancient Greece , the works of Archimedes (287–212 BC) influenced mechanics in the Western tradition. The geared Antikythera mechanism was an analog computer invented around the 2nd century BC. [ 24 ] In Roman Egypt , Heron of Alexandria (c. 10–70 AD) created the first steam-powered device ( Aeolipile ). [ 25 ] In China , Zhang Heng (78–139 AD) improved a water clock and invented a seismometer , and Ma Jun (200–265 AD) invented a chariot with differential gears. The medieval Chinese horologist and engineer Su Song (1020–1101 AD) incorporated an escapement mechanism into his astronomical clock tower two centuries before escapement devices were found in medieval European clocks. He also invented the world's first known endless power-transmitting chain drive . [ 26 ] The cotton gin was invented in India by the 6th century AD, [ 27 ] and the spinning wheel was invented in the Islamic world by the early 11th century. [ 28 ] Dual-roller gins appeared in India and China between the 12th and 14th centuries. [ 29 ] The worm gear roller gin appeared in the Indian subcontinent during the early Delhi Sultanate era of the 13th to 14th centuries. [ 30 ] During the Islamic Golden Age (7th to 15th century), Muslim inventors made remarkable contributions in the field of mechanical technology. Al-Jazari , one of them, wrote his famous Book of Knowledge of Ingenious Mechanical Devices in 1206 and presented many mechanical designs. In the 17th century, important breakthroughs in the foundations of mechanical engineering occurred in England and on the Continent . The Dutch mathematician and physicist Christiaan Huygens invented the pendulum clock in 1657, which remained the most reliable timekeeper for almost 300 years, and published a work dedicated to clock designs and the theory behind them. [ 31 ] [ 32 ] In England, Isaac Newton formulated his laws of motion and developed calculus , which would become the mathematical basis of physics. Newton was reluctant to publish his works for years, but he was finally persuaded to do so by his colleagues, such as Edmond Halley . Gottfried Wilhelm Leibniz , who earlier designed a mechanical calculator , is also credited with developing the calculus during the same time period. [ 33 ] During the early 19th century Industrial Revolution, machine tools were developed in England, Germany , and Scotland . This allowed mechanical engineering to develop as a separate field within engineering; these developments brought with them manufacturing machines and the engines to power them. [ 34 ] The first British professional society of mechanical engineers, the Institution of Mechanical Engineers , was formed in 1847, thirty years after the civil engineers had formed the first such professional society, the Institution of Civil Engineers . [ 35 ] On the European continent , Johann von Zimmermann (1820–1901) founded the first factory for grinding machines in Chemnitz , Germany in 1848. In the United States, the American Society of Mechanical Engineers (ASME) was formed in 1880, becoming the third such professional engineering society, after the American Society of Civil Engineers (1852) and the American Institute of Mining Engineers (1871). 
[ 36 ] The first schools in the United States to offer an engineering education were the United States Military Academy in 1817, an institution now known as Norwich University in 1819, and Rensselaer Polytechnic Institute in 1825. Education in mechanical engineering has historically been based on a strong foundation in mathematics and science. [ 37 ] Degrees in mechanical engineering are offered at various universities worldwide. Mechanical engineering programs typically take four to five years of study depending on the place and university and result in a Bachelor of Engineering (B.Eng. or B.E.), Bachelor of Science (B.Sc. or B.S.), Bachelor of Science Engineering (B.Sc.Eng.), Bachelor of Technology (B.Tech.), Bachelor of Mechanical Engineering (B.M.E.), or Bachelor of Applied Science (B.A.Sc.) degree, in or with emphasis in mechanical engineering. In Spain, Portugal and most of South America, where neither B.S. nor B.Tech. programs have been adopted, the formal name for the degree is "Mechanical Engineer", and the course work is based on five or six years of training. In Italy the course work is based on five years of education and training, but in order to qualify as an Engineer one has to pass a state exam at the end of the course. In Greece, the coursework is based on a five-year curriculum. [ 38 ] In the United States , most undergraduate mechanical engineering programs are accredited by the Accreditation Board for Engineering and Technology (ABET) to ensure similar course requirements and standards among universities. The ABET web site lists 302 accredited mechanical engineering programs as of 11 March 2014. [ 39 ] Mechanical engineering programs in Canada are accredited by the Canadian Engineering Accreditation Board (CEAB), [ 40 ] and most other countries offering engineering degrees have similar accreditation societies. In Australia , mechanical engineering degrees are awarded as Bachelor of Engineering (Mechanical) or similar nomenclature, although there are an increasing number of specialisations. The degree takes four years of full-time study to achieve. To ensure quality in engineering degrees, Engineers Australia accredits engineering degrees awarded by Australian universities in accordance with the global Washington Accord . Before the degree can be awarded, the student must complete at least 3 months of on-the-job work experience in an engineering firm. [ 41 ] Similar systems are also present in South Africa and are overseen by the Engineering Council of South Africa (ECSA). In India , to become an engineer, one needs to have an engineering degree like a B.Tech. or B.E., have a diploma in engineering, or complete a course in an engineering trade such as fitter at an Industrial Training Institute (ITI) to receive an "ITI Trade Certificate" and also pass the All India Trade Test (AITT) in an engineering trade conducted by the National Council of Vocational Training (NCVT), by which one is awarded a "National Trade Certificate". A similar system is used in Nepal. [ 42 ] Some mechanical engineers go on to pursue a postgraduate degree such as a Master of Engineering , Master of Technology , Master of Science , Master of Engineering Management (M.Eng.Mgt. or M.E.M.), a Doctor of Philosophy in engineering (Eng.D. or Ph.D.) or an engineer's degree . The master's and engineer's degrees may or may not include research . The Doctor of Philosophy includes a significant research component and is often viewed as the entry point to academia . 
[ 43 ] The Engineer's degree exists at a few institutions at an intermediate level between the master's degree and the doctorate. Standards set by each country's accreditation society are intended to provide uniformity in fundamental subject material, to promote competence among graduating engineers, and to maintain confidence in the engineering profession as a whole. Engineering programs in the U.S., for example, are required by ABET to show that their students can "work professionally in both thermal and mechanical systems areas." [ 44 ] The specific courses required to graduate, however, may differ from program to program. Universities and institutes of technology will often combine multiple subjects into a single class or split a subject into multiple classes, depending on the faculty available and the university's major area(s) of research. The fundamental subjects required for mechanical engineering usually include the core areas noted above, such as mechanics, dynamics, thermodynamics, materials science, structural analysis, and design. Mechanical engineers are also expected to understand and be able to apply basic concepts from chemistry, physics, tribology , chemical engineering , civil engineering , and electrical engineering . All mechanical engineering programs include multiple semesters of mathematical classes including calculus, and advanced mathematical concepts including differential equations , partial differential equations , linear algebra , differential geometry , and statistics , among others. In addition to the core mechanical engineering curriculum, many mechanical engineering programs offer more specialized programs and classes, such as control systems , robotics, transport and logistics , cryogenics , fuel technology, automotive engineering , biomechanics , vibration, optics and others, if a separate department does not exist for these subjects. [ 47 ] Most mechanical engineering programs also require varying amounts of research or community projects to gain practical problem-solving experience. In the United States it is common for mechanical engineering students to complete one or more internships while studying, though this is not typically mandated by the university. Cooperative education is another option. Research on future work skills [ 48 ] places demands on study components that foster students' creativity and innovation. [ 49 ] Mechanical engineers research, design, develop, build, and test mechanical and thermal devices, including tools, engines, and machines. Among other duties, mechanical engineers design and oversee the manufacturing of many products ranging from medical devices to new batteries. They also design power-producing machines such as electric generators, internal combustion engines, and steam and gas turbines as well as power-using machines, such as refrigeration and air-conditioning systems. Like other engineers, mechanical engineers use computers to help create and analyze designs, run simulations, and test how a machine is likely to work. Engineers may seek licensure from a state, provincial, or national government. The purpose of this process is to ensure that engineers possess the necessary technical knowledge, real-world experience, and knowledge of the local legal system to practice engineering at a professional level. 
Once certified, the engineer is given the title of Professional Engineer (in the United States, Canada, Japan, South Korea, Bangladesh and South Africa), Chartered Engineer (in the United Kingdom, Ireland, India and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union). In the U.S., to become a licensed Professional Engineer (PE), an engineer must pass the comprehensive FE (Fundamentals of Engineering) exam, work a minimum of 4 years as an Engineering Intern (EI) or Engineer-in-Training (EIT) , and pass the "Principles and Practice" or PE (Practicing Engineer or Professional Engineer) exams. The requirements and steps of this process are set forth by the National Council of Examiners for Engineering and Surveying (NCEES), composed of engineering and land surveying licensing boards representing all U.S. states and territories. In Australia (Queensland and Victoria), an engineer must be registered as a Professional Engineer within the state in which they practice, for example as a Registered Professional Engineer of Queensland or Victoria (RPEQ or RPEV respectively). In the UK, current graduates require a BEng plus an appropriate master's degree or an integrated MEng degree, a minimum of 4 years of post-graduate on-the-job competency development, and a peer-reviewed project report to become a Chartered Mechanical Engineer (CEng, MIMechE) through the Institution of Mechanical Engineers . CEng MIMechE can also be obtained via an examination route administered by the City and Guilds of London Institute . [ 50 ] In most developed countries, certain engineering tasks, such as the design of bridges, electric power plants, and chemical plants, must be approved by a professional engineer or a chartered engineer . "Only a licensed engineer, for instance, may prepare, sign, seal and submit engineering plans and drawings to a public authority for approval, or to seal engineering work for public and private clients." [ 51 ] This requirement can be written into state and provincial legislation, such as in the Canadian provinces, for example the Ontario or Quebec Engineers Act. [ 52 ] In other countries, such as the UK, no such legislation exists; however, practically all certifying bodies maintain a code of ethics, independent of legislation, that they expect all members to abide by or risk expulsion. [ 53 ] The total number of engineers employed in the U.S. in 2015 was roughly 1.6 million. Of these, 278,340 were mechanical engineers (17.28%), the largest discipline by size. [ 54 ] In 2012, the median annual income of mechanical engineers in the U.S. workforce was $80,580. The median income was highest when working for the government ($92,030), and lowest in education ($57,090). [ 55 ] In 2014, the total number of mechanical engineering jobs was projected to grow 5% over the next decade. [ 56 ] As of 2009, the average starting salary was $58,800 with a bachelor's degree. [ 57 ] The field of mechanical engineering can be thought of as a collection of many mechanical engineering science disciplines. Several of these subdisciplines which are typically taught at the undergraduate level are listed below, with a brief explanation and the most common application of each. Some of these subdisciplines are unique to mechanical engineering, while others are a combination of mechanical engineering and one or more other disciplines. 
Most work that a mechanical engineer does uses skills and techniques from several of these subdisciplines, as well as specialized subdisciplines. Specialized subdisciplines, as used in this article, are more likely to be the subject of graduate studies or on-the-job training than undergraduate research. Several specialized subdisciplines are discussed in this section. Mechanics is, in the most general sense, the study of forces and their effect upon matter . Typically, engineering mechanics is used to analyze and predict the acceleration and deformation (both elastic and plastic ) of objects under known forces (also called loads) or stresses . Subdisciplines of mechanics include statics, dynamics, mechanics of materials, fluid mechanics, and kinematics. Mechanical engineers typically use mechanics in the design or analysis phases of engineering. If the engineering project were the design of a vehicle, statics might be employed to design the frame of the vehicle, in order to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine, to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle (see HVAC ), or to design the intake system for the engine. Mechatronics is a combination of mechanics and electronics. It is an interdisciplinary branch of mechanical engineering, electrical engineering and software engineering that is concerned with integrating electrical and mechanical engineering to create hybrid automation systems. In this way, machines can be automated through the use of electric motors , servo-mechanisms , and other electrical systems in conjunction with special software. A common example of a mechatronics system is a CD-ROM drive. Mechanical systems open and close the drive, spin the CD and move the laser, while an optical system reads the data on the CD and converts it to bits . Integrated software controls the process and communicates the contents of the CD to the computer. Robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot). Robots are used extensively in industrial automation engineering. They allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and ensure better quality. Many companies employ assembly lines of robots, especially in the automotive industry, and some factories are so robotized that they can run by themselves . Outside the factory, robots have been employed in bomb disposal, space exploration , and many other fields. Robots are also sold for various residential applications, from recreation to domestic tasks. [ 59 ] Structural analysis is the branch of mechanical engineering (and also civil engineering) devoted to examining why and how objects fail, and to fixing those objects and improving their performance. Structural failures occur in two general modes: static failure and fatigue failure. 
Static structural failure occurs when, upon being loaded (having a force applied) the object being analyzed either breaks or is deformed plastically , depending on the criterion for failure. Fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. Fatigue failure occurs because of imperfections in the object: a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle (propagation) until the crack is large enough to cause ultimate failure . [ 60 ] Failure is not simply defined as when a part breaks, however; it is defined as when a part does not operate as intended. Some systems, such as the perforated top sections of some plastic bags, are designed to break. If these systems do not break, failure analysis might be employed to determine the cause. Structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure. Engineers often use online documents and books such as those published by ASM [ 61 ] to aid them in determining the type of failure and possible causes. Once theory is applied to a mechanical design, physical testing is often performed to verify calculated results. Structural analysis may be used in an office when designing parts, in the field to analyze failed parts, or in laboratories where parts might undergo controlled failure tests. Thermodynamics is an applied science used in several branches of engineering, including mechanical and chemical engineering. At its simplest, thermodynamics is the study of energy, its use and transformation through a system . [ 62 ] Typically, engineering thermodynamics is concerned with changing energy from one form to another. As an example, automotive engines convert chemical energy ( enthalpy ) from the fuel into heat, and then into mechanical work that eventually turns the wheels. Thermodynamics principles are used by mechanical engineers in the fields of heat transfer , thermofluids , and energy conversion . Mechanical engineers use thermo-science to design engines and power plants , heating, ventilation, and air-conditioning (HVAC) systems, heat exchangers , heat sinks , radiators , refrigeration , insulation , and others. [ 63 ] Drafting or technical drawing is the means by which mechanical engineers design products and create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. [ 64 ] A U.S. mechanical engineer or skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions. Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also manually manufacture a part using the technical drawings. However, with the advent of computer numerically controlled (CNC) manufacturing, parts can now be fabricated without the need for constant technician input. Manually manufactured parts generally consist of spray coatings , surface finishes, and other processes that cannot economically or practically be done by a machine. 
Drafting is used in nearly every subdiscipline of mechanical engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD). Many mechanical engineering companies, especially those in industrialized nations, have incorporated computer-aided engineering (CAE) programs into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and the ease of use in designing mating interfaces and tolerances. Other CAE programs commonly used by mechanical engineers include product lifecycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM). Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. No physical prototype need be created until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of a relative few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity , complex contact between mating parts, or non-Newtonian flows . As mechanical engineering begins to merge with other disciplines, as seen in mechatronics , multidisciplinary design optimization (MDO) is being used with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes, allowing product evaluation to continue even after the analyst goes home for the day. They also use sophisticated optimization algorithms to more intelligently explore possible designs, often finding better, innovative solutions to difficult multidisciplinary design problems. Mechanical engineers are constantly pushing the boundaries of what is physically possible in order to produce safer, cheaper, and more efficient machines and mechanical systems. Some technologies at the cutting edge of mechanical engineering are listed below (see also exploratory engineering ). Micron-scale mechanical components such as springs, gears, fluidic and heat transfer devices are fabricated from a variety of substrate materials such as silicon, glass and polymers like SU8 . Examples of MEMS components are the accelerometers that are used as car airbag sensors, modern cell phones, gyroscopes for precise positioning and microfluidic devices used in biomedical applications. Friction stir welding, a new type of welding , was discovered in 1991 by The Welding Institute (TWI). The innovative steady state (non-fusion) welding technique joins materials previously un-weldable, including several aluminum alloys . It plays an important role in the future construction of airplanes, potentially replacing rivets. 
Current uses of this technology include welding the seams of the aluminum main Space Shuttle external tank, Orion Crew Vehicle, Boeing Delta II and Delta IV Expendable Launch Vehicles and the SpaceX Falcon 1 rocket, armor plating for amphibious assault ships, and welding the wings and fuselage panels of the new Eclipse 500 aircraft from Eclipse Aviation, among a growing pool of uses. [ 65 ] [ 66 ] [ 67 ] Composites or composite materials are combinations of materials which provide different physical characteristics than either material separately. Composite material research within mechanical engineering typically focuses on designing (and, subsequently, finding applications for) stronger or more rigid materials while attempting to reduce weight , susceptibility to corrosion, and other undesirable factors. Carbon fiber reinforced composites, for instance, have been used in such diverse applications as spacecraft and fishing rods. Mechatronics is the synergistic combination of mechanical engineering, electronic engineering , and software engineering. The discipline of mechatronics began as a way to combine mechanical principles with electrical engineering. Mechatronic concepts are used in the majority of electro-mechanical systems. [ 68 ] Typical electro-mechanical sensors used in mechatronics are strain gauges, thermocouples, and pressure transducers. At the smallest scales, mechanical engineering becomes nanotechnology—one speculative goal of which is to create a molecular assembler to build molecules and materials via mechanosynthesis . For now that goal remains within exploratory engineering . Areas of current mechanical engineering research in nanotechnology include nanofilters, [ 69 ] nanofilms , [ 70 ] and nanostructures, [ 71 ] among others. Finite Element Analysis is a computational tool used to estimate stress, strain, and deflection of solid bodies. It uses a mesh with user-defined element sizes to compute physical quantities at each node. The more nodes there are, the higher the precision. [ 72 ] This field is not new, as the basis of Finite Element Analysis (FEA) or the Finite Element Method (FEM) dates back to 1941, but the evolution of computers has made FEA/FEM a viable option for the analysis of structural problems. Many commercial software applications such as NASTRAN , ANSYS , and ABAQUS are widely used in industry for research and the design of components. Some 3D modeling and CAD software packages have added FEA modules. More recently, cloud simulation platforms like SimScale have become more common. Other techniques such as the finite difference method (FDM) and the finite-volume method (FVM) are employed to solve problems relating to heat and mass transfer, fluid flows, fluid–surface interaction, etc. Biomechanics is the application of mechanical principles to biological systems, such as humans , animals , plants , organs , and cells . [ 73 ] Biomechanics also aids in creating prosthetic limbs and artificial organs for humans. [ 74 ] Biomechanics is closely related to engineering , because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. In the past decade, reverse engineering of materials found in nature such as bone matter has gained funding in academia. The structure of bone matter is optimized for its purpose of bearing a large amount of compressive stress per unit weight. 
[ 75 ] The goal is to replace crude steel with bio-material for structural design. Over the past decade the Finite element method (FEM) has also entered the Biomedical sector highlighting further engineering aspects of Biomechanics. FEM has since then established itself as an alternative to in vivo surgical assessment and gained the wide acceptance of academia. The main advantage of Computational Biomechanics lies in its ability to determine the endo-anatomical response of an anatomy, without being subject to ethical restrictions. [ 76 ] This has led FE modelling to the point of becoming ubiquitous in several fields of Biomechanics while several projects have even adopted an open source philosophy (e.g. BioSpine). Computational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. [ 77 ] With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as turbulent flows. Initial validation of such software is performed using a wind tunnel with the final validation coming in full-scale testing, e.g. flight tests. Acoustical engineering is one of many other sub-disciplines of mechanical engineering and is the application of acoustics. Acoustical engineering is the study of Sound and Vibration . These engineers work effectively to reduce noise pollution in mechanical devices and in buildings by soundproofing or removing sources of unwanted noise. The study of acoustics can range from designing a more efficient hearing aid, microphone, headphone, or recording studio to enhancing the sound quality of an orchestra hall. Acoustical engineering also deals with the vibration of different mechanical systems. [ 78 ] Manufacturing engineering , aerospace engineering , automotive engineering and marine engineering are grouped with mechanical engineering at times. A bachelor's degree in these areas will typically have a difference of a few specialized classes.
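As a rough illustration of the finite element idea discussed above (a mesh of nodes, an assembled stiffness matrix, and a solve for displacements), the following minimal sketch analyses a one-dimensional axial bar. All material and load values are assumed purely for illustration, and the model is deliberately simplistic compared with the commercial FEA packages named earlier.

```python
import numpy as np

# Minimal 1D finite element sketch: an axial bar fixed at x = 0 with a tip
# load, split into n equal two-node elements.  E, A, L and P are assumed.
E, A, L, P, n = 200e9, 1e-4, 1.0, 1000.0, 10          # Pa, m^2, m, N, elements
le = L / n
k_el = (E * A / le) * np.array([[1.0, -1.0],
                                [-1.0, 1.0]])         # element stiffness matrix

K = np.zeros((n + 1, n + 1))
for e in range(n):                                    # assemble global stiffness
    K[e:e + 2, e:e + 2] += k_el

F = np.zeros(n + 1)
F[-1] = P                                             # point load at the free end

# Apply the fixed boundary condition at node 0 and solve K u = F.
u = np.zeros(n + 1)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

print(u[-1])   # ~5.0e-05 m, which matches the exact result P*L/(E*A)
```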
https://en.wikipedia.org/wiki/Mechanical_engineering
Mechanical engineering technology is the application of engineering principles and technological developments for the creation of useful products and production machinery. [ 1 ] Mechanical engineering technologists are expected to apply current technologies and principles from machine and product design, production, and material and manufacturing processes. Specialties may include aerospace, automotive, energy, nuclear, petroleum, manufacturing, product development, and industrial design . Mechanical engineering technologists can hold many different job titles, particularly in the United States. Mechanical Engineering Technology coursework is less theoretical and more application-based than a mechanical engineering degree. This is evident through the additional laboratory coursework required for the degree. The ability to apply concepts from the chemical engineering and electrical engineering fields is important. Some university Mechanical Engineering Technology degree programs require mathematics through differential equations and statistics ; most courses involve algebra and calculus. Often an MET graduate can be hired as an engineer; job titles may include Mechanical Engineer and Manufacturing Engineer. In the U.S. it is possible to earn an associate or bachelor's degree. Individuals with a bachelor's degree in engineering technology may continue on to complete the E.I.T. (Engineer in Training) examination and eventually become Professional Engineers if the program is ABET accredited. Software tools such as Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD) are often used to analyze parts and assemblies. 3D models can be made to represent parts and assemblies with computer-aided design (CAD) software. Through the application of computer-aided manufacturing (CAM), models may also be used directly by software to create "instructions" for the manufacture of objects represented by the models, through computer numerically controlled (CNC) machining or other automated processes.
https://en.wikipedia.org/wiki/Mechanical_engineering_technology
Mechanical impedance is a measure of how much a structure resists motion when subjected to a harmonic force. It relates the forces acting on a mechanical system to the resulting velocities. The mechanical impedance of a point on a structure is the ratio of the force applied at that point to the resulting velocity at that point. [ 1 ] [ 2 ] Mechanical impedance is the inverse of mechanical admittance or mobility. The mechanical impedance is a function of the frequency ω of the applied force and can vary greatly over frequency. At resonance frequencies, the mechanical impedance is lower, meaning less force is needed to cause the structure to move at a given velocity. A simple example of this is pushing a child on a swing: for the greatest swing amplitude, the frequency of the pushes must be near the resonant frequency of the system. In matrix form, F(ω) = Z(ω) v(ω), where F is the force vector, v is the velocity vector, Z is the impedance matrix, and ω is the angular frequency. More generally, mechanical impedance is the ratio of a potential (e.g., force) to a flow (e.g., velocity) where the arguments of the real (or imaginary) parts of both increase linearly with time. Examples of potentials are: force, sound pressure, voltage, temperature. Examples of flows are: velocity, volume velocity, current, heat flow. Impedance is the reciprocal of mobility. If the potential and flow quantities are measured at the same point then the impedance is referred to as the driving-point impedance; otherwise, it is a transfer impedance.
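For a single mass–spring–damper, the driving-point impedance takes the familiar form Z(ω) = c + j(ωm − k/ω). The short sketch below evaluates this expression below, at, and above resonance to illustrate the dip in |Z| described above; the parameter values are assumed purely for illustration.

```python
import numpy as np

def impedance(m, c, k, omega):
    """Driving-point impedance Z(w) = F/v of a mass-spring-damper at angular
    frequency w: Z = c + j*(w*m - k/w).  Illustrative lumped-parameter model."""
    return c + 1j * (omega * m - k / omega)

m, c, k = 2.0, 5.0, 800.0          # kg, N*s/m, N/m (assumed values)
w_res = np.sqrt(k / m)             # undamped natural frequency, rad/s
for w in (0.5 * w_res, w_res, 2.0 * w_res):
    print(f"w = {w:5.1f} rad/s   |Z| = {abs(impedance(m, c, k, w)):6.1f} N*s/m")
# |Z| is smallest at resonance: the same force produces the largest velocity.
```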
https://en.wikipedia.org/wiki/Mechanical_impedance
A mechanical joint is a section of a machine which is used to connect one or more mechanical parts to another. Mechanical joints may be temporary or permanent; most types are designed to be disassembled. Most mechanical joints are designed to allow relative movement of these mechanical parts of the machine in one degree of freedom , and restrict movement in one or more others. [ 1 ] A pin joint, also called a revolute joint, is a one- degree-of-freedom kinematic pair . It constrains the motion of two bodies to pure rotation about a common axis. The joint does not allow translation, or sliding linear motion. This is usually achieved through a rotary bearing . It enforces a cylindrical contact area, which makes it a lower kinematic pair , also called a full joint. A prismatic joint provides a linear sliding movement between two bodies, and is often called a slider, as in the slider-crank linkage . A prismatic pair is also called a sliding pair. A prismatic joint can be formed with a polygonal cross-section to resist rotation. The relative position of two bodies connected by a prismatic joint is defined by the amount of linear slide of one relative to the other. This one-parameter movement identifies the joint as a one-degree-of-freedom kinematic pair . [ 2 ] Prismatic joints provide single-axis sliding often found in hydraulic and pneumatic cylinders . [ 3 ] In an automobile, ball joints are spherical bearings that connect the control arms to the steering knuckles . They are used on virtually every automobile made [ 4 ] and work similarly to the ball-and-socket design of the human hip joint . [ 5 ] A ball joint consists of a bearing stud and socket enclosed in a casing; all these parts are made of steel . The bearing stud is tapered and threaded, and fits into a tapered hole in the steering knuckle. A protective encasing prevents dirt from getting into the joint assembly. Usually, this is a rubber-like boot that allows movement and expansion of lubricant . Motion-control ball joints tend to be retained with an internal spring, which helps to prevent vibration problems in the linkage. The "offset" ball joint provides a means of movement in systems where thermal expansion and contraction, shock, seismic motion, and torsional motions and forces are present. [ 6 ] A cotter joint is mainly used to rigidly connect two rods which transmit motion in the axial direction, without rotation. These joints may be subjected to tensile or compressive forces along the axes of the rods. A well-known example is the joining of the piston rod's extension with the connecting rod in a crosshead assembly. A bolted joint is the most popular mechanical joint for connecting two members together. It is easy to design and easy to procure parts for, making it a common choice for many applications.
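To illustrate how a revolute joint (the crank pivot) and a prismatic joint (the slider) each leave a single degree of freedom, the sketch below computes the slider position of an in-line slider-crank from one crank angle using the standard kinematic relation. The dimensions are assumed values for illustration, not figures from the article.

```python
import math

def slider_position(theta, r, l):
    """Slider displacement of an in-line slider-crank for crank angle theta
    (rad), crank radius r and connecting-rod length l.  One input angle fixes
    the whole configuration: the linkage has a single degree of freedom."""
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

r, l = 0.05, 0.20                      # metres, illustrative values
for deg in (0, 90, 180):
    x = slider_position(math.radians(deg), r, l)
    print(f"crank at {deg:3d} deg -> slider at {x:.4f} m")
# 0 deg -> 0.2500 m (top dead centre); 180 deg -> 0.1500 m (bottom dead centre)
```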
https://en.wikipedia.org/wiki/Mechanical_joint
Mechanical load is the physical stress on a mechanical system or component [ 1 ] leading to strain . Loads can be static or dynamic . Some loads are specified as part of the design criteria of a mechanical system. Depending on the usage, some mechanical loads can be measured by an appropriate test method in a laboratory or in the field. The load can also be the external mechanical resistance against which a machine (such as a motor or engine) acts. [ 2 ] The load can often be expressed as a curve of force versus speed. For instance, a given car traveling on a road of a given slope presents a load which the engine must act against. Because air resistance increases with speed, the motor must put out more torque at a higher speed in order to maintain that speed. By shifting to a higher gear, one may be able to meet the requirement with a higher torque and a lower engine speed, whereas shifting to a lower gear has the opposite effect. Accelerating increases the load, whereas decelerating decreases the load. Similarly, the load on a pump depends on the head against which the pump is pumping, and on the size of the pump. Similar considerations apply to a fan . See Affinity laws .
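The force-versus-speed load curve mentioned above can be made concrete with a small calculation that combines rolling resistance, grade and aerodynamic drag for the car example. All coefficients below are illustrative assumptions, not figures from this article.

```python
def road_load(v, mass=1500.0, slope=0.02, c_rr=0.012, cda=0.7, rho=1.2):
    """Approximate tractive force (N) the engine must overcome at speed v (m/s):
    rolling resistance + grade resistance + aerodynamic drag.
    All coefficients are assumed values chosen only for illustration."""
    g = 9.81
    rolling = c_rr * mass * g
    grade = mass * g * slope
    aero = 0.5 * rho * cda * v**2
    return rolling + grade + aero

for kph in (30, 60, 90, 120):
    v = kph / 3.6
    f = road_load(v)
    print(f"{kph:3d} km/h: force {f:6.0f} N, power {f * v / 1000:5.1f} kW")
# The aerodynamic term grows with v^2, so the load curve steepens with speed.
```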
https://en.wikipedia.org/wiki/Mechanical_load
Mechanical metamaterials are rationally designed artificial materials/structures of precision geometrical arrangements leading to unusual physical and mechanical properties. These unprecedented properties are often derived from their unique internal structures rather than the materials from which they are made. Inspiration for mechanical metamaterials design often comes from biological materials (such as honeycombs and cells), from molecular and crystalline unit cell structures as well as the artistic fields of origami and kirigami. While early mechanical metamaterials had regular repeats of simple unit cell structures, increasingly complex units and architectures are now being explored. Mechanical metamaterials can be seen as a counterpart to the rather well-known family of optical metamaterials and electromagnetic metamaterials . Mechanical properties, including elasticity, viscoelasticity, and thermoelasticity, are central to the design of mechanical metamaterials. They are often also referred to as elastic metamaterials or elastodynamic metamaterials . Their mechanical properties can be designed to have values that cannot be found in nature, such as negative stiffness, negative Poisson’s ratio, negative compressibility, and vanishing shear modulus. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] 3D printing , or additive manufacturing, has revolutionized the field in the past decade by enabling the fabrication of intricate mechanical metamaterial structures. Some of the unprecedented and unusual properties of classical mechanical metamaterials include: Poisson's ratio defines how a material expands (or contracts) transversely when being compressed longitudinally. While most natural materials have a positive Poisson's ratio (coinciding with our intuitive idea that by compressing a material, it must expand in the orthogonal direction), a family of extreme materials known as auxetic materials can exhibit Poisson's ratios below zero. Examples of these can be found in nature, or fabricated, [ 14 ] [ 15 ] and often consist of a low-volume microstructure that grants the extreme properties. Simple designs of composites possessing negative Poisson's ratio (inverted hexagonal periodicity cell) were published in 1985. [ 16 ] [ 17 ] In addition, certain origami folds such as the Miura fold and, in general, zigzag-based folds are also known to exhibit negative Poisson's ratio. [ 18 ] [ 19 ] [ 20 ] [ 21 ] Negative stiffness (NS) mechanical metamaterials are engineered structures that exhibit a counterintuitive property: as an external force is applied, the material deforms in a way that reduces the applied force rather than increasing it. This is in contrast to conventional materials that resist deformation. [ 22 ] [ 23 ] [ 24 ] [ 25 ] NS metamaterials are typically constructed from periodically arranged elements that undergo elastic instability under load. This instability leads to a negative stiffness behavior within a specific deformation range. The overall effect is a material that can absorb energy more efficiently and exhibit unique mechanical properties compared to traditional materials. These mechanical metamaterials can exhibit coefficients of thermal expansion larger than that of either constituent. [ 26 ] [ 27 ] [ 28 ] The expansion can be arbitrarily large positive or arbitrarily large negative, or zero. These materials substantially exceed the bounds for thermal expansion of a two-phase composite. They contain considerable void space. 
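The sign convention behind the auxetic behaviour described above can be illustrated with a two-line calculation of Poisson's ratio from measured strains; the strain values below are invented for illustration only.

```python
def poissons_ratio(axial_strain, transverse_strain):
    """nu = -(transverse strain) / (axial strain) for a uniaxial test."""
    return -transverse_strain / axial_strain

# Conventional material: compress it 1% axially and it bulges 0.3% sideways.
print(poissons_ratio(-0.010, 0.003))    # 0.3

# Auxetic (e.g. re-entrant) lattice: compress it 1% axially and it also
# contracts 0.4% sideways, giving a negative Poisson's ratio.
print(poissons_ratio(-0.010, -0.004))   # -0.4
```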
A high strength-to-density ratio mechanical metamaterial is a synthetic material engineered to possess exceptional mechanical properties relative to its weight. This is achieved through carefully designed internal microstructures, often periodic or hierarchical, which contribute to the material's overall performance. [ 29 ] [ 4 ] In a closed thermodynamic system in equilibrium, both the longitudinal and volumetric compressibility are necessarily non-negative because of stability constraints. For this reason, when tensioned, ordinary materials expand along the direction of the applied force. It has been shown, however, that metamaterials can be designed to exhibit negative compressibility transitions, during which the material undergoes contraction when tensioned (or expansion when pressured). [ 30 ] When subjected to isotropic stresses, these metamaterials also exhibit negative volumetric compressibility transitions. [ 31 ] In this class of metamaterials, the negative response is along the direction of the applied force, which distinguishes these materials from those that exhibit negative transversal response (such as in the study of negative Poisson's ratio). Mechanical metamaterials with negative effective bulk modulus exhibit intriguing and counterintuitive properties. Unlike conventional materials that compress under pressure, these materials expand. This anomalous behavior stems from their carefully engineered microstructure, which allows for internal deformation mechanisms that counteract the applied stress. Potential applications for these materials are vast. They could be employed to design acoustic or phononic metamaterials , advanced shock absorbers, and energy dissipation systems. [ 32 ] [ 33 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] [ 40 ] Furthermore, their unique elastic properties may find utility in creating novel structural components with enhanced resilience and adaptability to dynamic loads. A pentamode metamaterial is an artificial three-dimensional structure which, despite being a solid, ideally behaves like a fluid. Thus, it has a finite bulk modulus but a vanishing shear modulus , or in other words it is hard to compress yet easy to deform. In more mathematical terms, pentamode metamaterials have an elasticity tensor with only one non-zero eigenvalue and five (penta) vanishing eigenvalues. Pentamode structures were proposed theoretically by Graeme Milton and Andrej Cherkaev in 1995 [ 41 ] but were not fabricated until early 2012. [ 42 ] According to theory, pentamode metamaterials can be used as the building blocks for materials with completely arbitrary elastic properties. [ 41 ] Anisotropic versions of pentamode structures are a candidate for transformation elastodynamics and elastodynamic cloaking. Very often Cauchy elasticity is sufficient to describe the effective behavior of mechanical metamaterials. When the unit cells of typical metamaterials are not centrosymmetric, however, it has been shown that an effective description using chiral micropolar elasticity (or Cosserat elasticity [ 43 ] ) is required. [ 44 ] Micropolar elasticity couples translational and rotational degrees of freedom in the static case and shows behavior equivalent to optical activity . In addition to the well-known unprecedented mechanical properties of mechanical metamaterials, "infinite mechanical tunability" is another crucial aspect of mechanical metamaterials. 
This is particularly important for structural materials, as their microstructure and stiffness can be tuned to effectively achieve theoretical upper bounds for specific stiffness and strength. [ 45 ] [ 46 ] [ 47 ] While theoretical composites that achieve the same upper bound have existed for some time, [ 48 ] they have been impractical to fabricate as they require features on multiple length scales. [ 49 ] Single-length-scale designs are amenable to additive manufacturing , where they can enable engineered systems that maximize lightweight stiffness, strength and energy absorption. To date, most mainstream studies on mechanical metamaterials have focused on passive structures with fixed properties, lacking active sensing or feedback capabilities. [ 50 ] [ 13 ] Deep integration of advanced functionalities is a critical challenge in exploring the next generation of metamaterials. [ 51 ] Composite mechanical metamaterials could be the key to achieving this goal. However, the entire concept of composite mechanical metamaterials is still in its infancy. Obtaining programmable behavior through the interplay between material and structure in composite mechanical metamaterials enables integrating advanced functionalities into their texture beyond their mechanical properties. The “mechanical metamaterial tree of knowledge” [ 13 ] suggests that chiral, lattice and negative metamaterials (e.g., negative bulk modulus or negative elastic modulus) are mature areas, followed by origami and cellular metamaterials. Recent research trends have been moving beyond merely exploring unprecedented mechanical properties. Emerging directions envisioned are sensing, energy harvesting, and actuating mechanical metamaterials. The tree of knowledge reveals that digital computing, digital data storage, and micro/nano-electromechanical systems (MEMS/NEMS) applications are one of the pillars of future mechanical metamaterials research. Along this direction of evolution, the final target can be active mechanical metamaterials with a level of cognition. Cognitive abilities are crucial elements in truly " intelligent mechanical metamaterials ". Similar to complex living organisms, intelligent mechanical metamaterials can potentially deploy their cognitive abilities for sensing, self-powering, and information processing to interact with the surrounding environment, optimizing their response and creating a sense–decide–respond loop. Programmable response is an emerging direction for mechanical metamaterials beyond mechanical properties. [ 52 ] [ 53 ] [ 54 ] [ 55 ] [ 56 ] [ 57 ] [ 58 ] Electrical responsiveness is an important functionality for designing adaptive, actuating, and autonomous mechanical metamaterials. [ 59 ] [ 60 ] For example, research avenues have been opened by active and adaptive mechanical metamaterials that build electrically responsive materials into the microstructural units of metamaterials to autonomously convert mechanical-strain input into electrical-signal output. [ 50 ] [ 61 ] Integrating functional materials and mechanical design is an emerging research area for exploring responsive mechanical metamaterials. [ 50 ] Recent studies explore new classes of mechanical metamaterials that can respond to different excitation types such as acoustic, [ 62 ] thermophotovoltaic [ 63 ] and magnetic. [ 64 ] Recent studies have also explored the integration of sensing and energy harvesting functionalities into the fabric of mechanical metamaterials. 
Meta-tribomaterials, [ 65 ] [ 66 ] proposed in 2021, are a new class of multifunctional composite mechanical metamaterials with intrinsic sensing and energy harvesting functionalities. These material systems are composed of finely tailored and topologically different triboelectric microstructures. Meta-tribomaterials can serve as nanogenerators and sensing media, directly collecting information about their operating environment. They naturally inherit the enhanced mechanical properties offered by classical mechanical metamaterials. Under mechanical excitation, meta-tribomaterials generate electrical signals which can be used for active sensing and for powering sensors and embedded electronics. [ 65 ] Electronic mechanical metamaterials [ 67 ] are active mechanical metamaterials with digital computing and information storage capabilities. They have built the foundation for a new scientific field, meta-mechanotronics (mechanical metamaterial electronics), proposed in 2023. [ 67 ] These material systems are created by integrating mechanical metamaterials, digital electronics and nano energy harvesting (e.g. triboelectric, piezoelectric, pyroelectric) technologies. Electronic mechanical metamaterials hold the potential to function as digital logic gates, paving the way for the development of mechanical metamaterial computers (MMCs) that could complement traditional electronic systems. [ 67 ] Such computing metamaterial systems could be particularly useful under extreme loads and in harsh environments (e.g. high pressure, high/low temperature and radiation exposure) where traditional semiconductor electronics cannot maintain their designed logical functions.
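The pentamode property mentioned earlier in this article (an elasticity tensor with one non-zero and five vanishing eigenvalues) can be checked numerically for the idealised, fluid-like limiting case. The sketch below only illustrates that definition; the bulk modulus value is an arbitrary assumption.

```python
import numpy as np

# Idealised pentamode ("meta-fluid") stiffness in 6x6 Voigt notation: only the
# hydrostatic mode resists deformation (bulk modulus K), all shear entries
# vanish.  K is an assumed illustrative value.
K = 2.2e9                      # Pa
C = np.zeros((6, 6))
C[:3, :3] = K                  # coupling of the three normal strains only

eigenvalues = np.linalg.eigvalsh(C)
print(np.round(eigenvalues / 1e9, 3))
# -> roughly [0, 0, 0, 0, 0, 6.6]: one non-zero eigenvalue (3K) and five
# vanishing ones, the defining property of a pentamode metamaterial.
```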
https://en.wikipedia.org/wiki/Mechanical_metamaterial
A mechanical network is an abstract interconnection of mechanical elements along the lines of an electrical circuit diagram . Elements include rigid bodies , springs , dampers , transmissions , and actuators . The basic element symbols used in such diagrams represent: a stiffness element (e.g. spring), a mass (rigid body), a mechanical resistance (e.g. damper), a force generator, and a velocity generator. The symbols for generators depend on which mechanical–electrical analogy is being used. The symbols described here relate to the impedance analogy ; in the mobility analogy the generator symbols are reversed, being respectively velocity and force generators.
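A minimal sketch of the impedance analogy mentioned above maps each mechanical element to its electrical counterpart (force to voltage, velocity to current), which is what lets circuit tools analyse a mechanical network. The element values are assumed for illustration.

```python
# Force-voltage (impedance) analogy: mass -> inductance, damper -> resistance,
# spring compliance (1/k) -> capacitance.  Values are illustrative assumptions.
mechanical = {"mass m [kg]": 2.0,
              "damper c [N*s/m]": 5.0,
              "spring k [N/m]": 800.0}

electrical = {"inductance L [H]": mechanical["mass m [kg]"],
              "resistance R [ohm]": mechanical["damper c [N*s/m]"],
              "capacitance C [F]": 1.0 / mechanical["spring k [N/m]"]}

for name, value in electrical.items():
    print(f"{name}: {value}")
# In the mobility analogy the roles are dual: force -> current, velocity -> voltage.
```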
https://en.wikipedia.org/wiki/Mechanical_network
The failure or fracture of a product or component as a result of a single event is known as mechanical overload . It is a common failure mode . The terms are used in forensic engineering and structural engineering when analysing product failure. Failure may occur because either the product is weaker than expected owing to a stress concentration , or the applied load is greater than expected and exceeds the normal tensile strength , shear strength or compressive strength of the product.
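A simple way to picture an overload check is to compare the applied stress, amplified by a stress-concentration factor, against the material strength. The sketch below is a deliberately crude static check with invented numbers, not a substitute for a proper engineering analysis.

```python
def overload_check(force_N, area_m2, strength_Pa, kt=1.0):
    """Crude static check: nominal stress times a stress-concentration factor
    kt, compared with the material strength.  All numbers are illustrative."""
    stress = kt * force_N / area_m2
    return stress, stress > strength_Pa

# 20 kN on a 50 mm^2 section of mild steel (yield ~250 MPa), with a notch (kt = 2).
stress, fails = overload_check(20e3, 50e-6, 250e6, kt=2.0)
print(f"peak stress {stress / 1e6:.0f} MPa -> {'overload' if fails else 'safe'}")
# 800 MPa -> overload: either the load was higher than expected, or the notch
# made the part weaker than its nominal cross-section suggests.
```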
https://en.wikipedia.org/wiki/Mechanical_overload
The mechanical paradox is an apparatus for studying physical paradoxes . It consists of a trapezoidal veneered wooden frame with two brass rails, and a pair of brass cones joined at their bases by a wooden disk which rests on the rails. When the double cone is placed at the low end of the frame, it automatically starts to roll upward, giving the impression that it is escaping the universal law of gravitation . The device's unintuitive behavior is due to the fact that the natural motion of bodies depends on that of their center of gravity , which has a natural tendency to descend. Since the rails diverge, the center of gravity of the double cone—when placed on the rotation axis at its maximum diameter—does not rise when the entire body seems to be moving upward; rather, the center is shifting downward. As it travels, the resting points of the double cone on the rails converge toward its two apexes. As a result, the distance of the center of gravity from the horizontal plane decreases as the double cone rises. An example of the instrument is held in the Lorraine collections of the Museo Galileo in Florence , Italy. The device was first published by William Leybourne in 1694.
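A simplified first-order model of the geometry described above shows the centre of gravity falling even while the cone appears to climb. The dimensions, rail slope and divergence angle below are assumed for illustration, and the contact geometry is approximated (the contact radius is taken as the cone radius at the rail half-separation).

```python
import math

R, L = 0.06, 0.12            # cone maximum radius and half-length (m); assumed
incline = math.radians(1)    # upward slope of the rails; assumed
diverge = math.radians(10)   # half-angle between the diverging rails; assumed

def cog_height(x, s0=0.01):
    """Approximate centre-of-gravity height after the cone has travelled a
    distance x along the rails (simplified first-order model)."""
    rail_gap = s0 + x * math.tan(diverge)          # rail half-separation at x
    contact_radius = R * (1 - rail_gap / L)        # cone radius at the contact points
    return x * math.tan(incline) + contact_radius  # rail rise plus radius above rails

for x in (0.0, 0.05, 0.10):
    print(f"x = {x:.2f} m -> centre of gravity at {cog_height(x) * 1000:.1f} mm")
# The printed heights decrease even though the rails rise, because
# tan(incline) < (R/L) * tan(diverge): the double cone rolls "uphill"
# while its centre of gravity actually descends.
```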
https://en.wikipedia.org/wiki/Mechanical_paradox
Mechanical plating , also known as peen plating , mechanical deposition , or impact plating , is a plating process that imparts the coating by cold welding fine metal particles to a workpiece. Mechanical galvanization is the same process, but applies to coatings that are thicker than 0.001 in (0.025 mm). [ 1 ] It is commonly used to overcome hydrogen embrittlement problems. Commonly plated workpieces include nails , screws , nuts , washers , stampings , springs , clips , and sintered iron components. [ 2 ] [ 3 ] The process involves tumbling the workpieces with a mixture of water, metal powder, media, and additives. Common coating materials are zinc , cadmium , tin , copper , and aluminium . [ 4 ] Invented by the Tainton Company in the 1950s, it was further developed by the 3M company. [ 5 ] The process begins with a descaling and removing soil from the workpiece. This can be done in the tumbler or in a separate cleaning system. After cleaning, the parts are prepared by combining them with water, medium, and a surface conditioner. The surface conditioner lightly coats the workpiece in copper , while the medium removes any residual mill scale or oxides . Finally, accelerators, promoters and metal powder are added to the mix. The accelerators and promoters provide the proper chemical environment for the plating to occur, such as the maintenance of a pH level of 1 to 2 to prevent oxidation and promote adhesion. The medium that is already in the mixture cold welds the metal powder to the workpiece through impacts that are induced by the tumbling action of the tumbler. At this point the surface finish is typically matte to a semi-bright finish, however the finish can be improved with a water polish . The time required for the above process is approximately 50 minutes. [ 1 ] [ 4 ] For some thinly coated workpieces a chromate passivation is necessary. Finally, the workpiece, whether passivated or not, is dried. [ 4 ] The medium material is usually soda lime glass or a ceramic . It is usually spherical in form, but angular shapes are also used. For plating, medium usage is usually 1 part medium for every workpiece, but for galvanization the ratio is 2:1. However, various sized media are used in each batch with a typical batch consisting of 50% 4–5 in (100–130 mm) sized beads, 25% 2–2.5 in (51–64 mm) sized beads, and 25% 1–1.25 in (25–32 mm) sized beads. The smaller media are omitted when the workpiece has a cavity that the medium can get caught in, such as a fastener's recessed head. Note that the medium is reused many times. [ 1 ] [ 4 ] This process works better if the workpieces' surface finish is slightly rough. [ 1 ] The most important piece of equipment in the process is the tumbler . It is constructed of steel or stainless steel and lined with an acid and abrasion resistant material, such as neoprene , polypropylene , and polybutylene . The barrel sizes range from 0.04–1.13 m 3 (1.4–39.9 cu ft), however the working volume is only 25 to 35% of the total volume. For most plating applications the tumbler is rotated at 60 RPM, however it can vary. If the speed is too fast then lumpy deposits will form on the workpieces, but if the speed is too slow then the metal powder will not deposit onto the workpiece. The separator separates the coated workpieces from the medium after coating. It can be as simple as a screen with water nozzles or as complicated as a vibratory system with magnetic separators. 
A medium handling machine then takes the separated medium and transports it to a storage tank for reuse. [ 4 ] The separated workpieces are then taken to a dryer to remove any moisture. Usually centrifugal dryers are used; however, ovens are used for larger parts or loads. [ 4 ] The greatest advantage of the process is its ability to overcome hydrogen embrittlement problems, which is important for workpieces that have a hardness greater than HRC 40. Note that there is still some embrittlement of the workpiece. [ 2 ] Unlike electroplating, this process does not cause hydrogen embrittlement problems, yet it offers equivalent corrosion protection. There is a great cost saving in using mechanical plating over electroplating on hardened workpieces, because the electroplating process requires pre- and post-plating operations to overcome hydrogen embrittlement problems. Moreover, because mechanical plating occurs at room temperature, there is no tempering of hardened workpieces. [ 4 ] Another advantage is that mechanical plating evenly coats all surfaces and features, unlike electroplating, which has issues plating recesses. Mechanical plating can apply even coatings up to 75 μm thick. For thicker coatings, mechanical plating is especially cost-advantageous compared with electroplating, because its cycle time does not increase much with coating thickness. [ 4 ] One of the disadvantages is the process's size limitations. Workpieces heavier than 1 lb (0.45 kg) can be damaged by the process, while flat lightweight workpieces tend to stick together so they are not properly plated. [ 1 ]
https://en.wikipedia.org/wiki/Mechanical_plating
A mechanical powder press is a machine press designed to compress powders, commonly used in the production of technical ceramics. Mechanical press machines differ in how the main movement of the upper punch is generated: by cams, spindle and friction drives, eccentrics, knuckle joints, or by the round-table principle. This classification is independent of whether the die or lower-punch movement is realized by cam or eccentric systems or by other mechanically or hydraulically combined systems. The execution of auxiliary movements is likewise not decisive for the classification; these auxiliary movements can also be based on pneumatic or hydraulic principles. In comparison with hydraulic press machines, the maximum compaction forces of mechanical powder presses are limited, typically to about 5,000 kN or less. For the requirements of wet- and dry-pressing techniques in the field of technical ceramics, cam, eccentric, knuckle-joint and round-table presses have all proved themselves, with cam presses used especially for wet pressing of pourable materials. The compaction forces of mechanical presses for technical-ceramic products are below about 2,500 kN, which is a consequence of the comparatively low density of ceramic materials. Normally the upper-punch, lower-punch and die systems of mechanical presses do not work with multiply subdivided punches. Adaptable cam designs, which can be tailored to the product concerned, meet the requirements of wet pressing of pourable compounds particularly well, while eccentric, knuckle-joint and round-table presses are used mainly for dry pressing of compounds with good flow characteristics. The sinusoidal motion of eccentric presses offers advantages for larger strokes at up to about 100 strokes per minute, and the distorted sinusoidal motion of the knuckle joint is used especially for products that need a longer de-airing time during compaction and a longer decompression time after bottom dead centre. For this compaction technology the rate is limited to fewer than 35 strokes per minute; in contrast, the compaction speed of round-table presses for small, simply shaped products is especially high, and outputs of up to 30,000 pieces per minute can be achieved. The unit of product, technology and press machine/tooling characterizes the relationship between the ceramic compaction technology and the design principles of the mechanical presses. Differences in the die-withdrawal and ejection technology of uniaxial dry- and wet-pressing in technical ceramics influence the achievable quality of the compacts, depending on the geometric shape of the products and on the type of press selected. Because a speed-proportional compaction relationship is very difficult to realize with purely mechanical presses and requires a large mechanical investment, combined mechanical-hydraulic designs based on subdivided punches are gaining importance, since they better meet the requirements of high-accuracy pressing. The design principles of mechanical powder presses, discussed in a dissertation by B. Froherz, offer a basic comparison of the advantages and disadvantages of the various technologies and presses, especially for tool designers.
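The roughly sinusoidal punch motion of an eccentric (crank-driven) press mentioned above follows from elementary kinematics. The sketch below is illustrative only and not from the source; the 30 mm stroke and 60 strokes per minute are assumed values, and connecting-rod corrections are ignored.

import math

stroke = 0.030          # total punch stroke in metres (assumed)
spm = 60                # strokes per minute (assumed)
omega = 2 * math.pi * spm / 60.0   # crank angular velocity in rad/s

def punch_position(t):
    """Punch displacement below top dead centre for an ideal crank drive:
    x(t) = (S/2) * (1 - cos(omega * t))."""
    return 0.5 * stroke * (1 - math.cos(omega * t))

def punch_velocity(t):
    """Punch velocity, the time derivative of the expression above."""
    return 0.5 * stroke * omega * math.sin(omega * t)

period = 2 * math.pi / omega
for frac in (0.0, 0.25, 0.5):
    t = frac * period
    print(f"t = {t:.3f} s: position = {punch_position(t)*1000:.1f} mm, "
          f"velocity = {punch_velocity(t)*1000:.1f} mm/s")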
https://en.wikipedia.org/wiki/Mechanical_powder_press
Mechanical screening, often just called screening, is the practice of taking granulated or crushed ore material and separating it into multiple grades by particle size. This practice occurs in a variety of industries such as mining and mineral processing, agriculture, pharmaceutical, food, plastics, and recycling. A method of separating solid particles according to size alone is called screening. Screening falls under two general categories: dry screening and wet screening. In both categories, screening separates a flow of material into grades; these grades are then either further processed into an intermediate product or become a finished product. Additionally, the machines can be categorized into moving-screen and static-screen machines, as well as by whether the screens are horizontal or inclined. The mining and mineral processing industry uses screening for a variety of processing applications. For example, after mining the minerals, the material is transported to a primary crusher. Before crushing, large boulders are scalped on a shaker with 0.25 in (6.4 mm) thick shielding screening. Further downstream, after crushing, the material can pass through screens with openings or slots that continue to become smaller. Finally, screening is used to make a final separation to produce saleable products based on a grade or a size range. A screening machine consists of a drive that induces vibration, a screen media that causes particle separation, and a deck which holds the screen media and the drive and is the mode of transport for the vibration. Physical factors such as vibration, g force, bed density, and material shape all affect the rate of separation and the cut. Electrostatic forces can also hinder screening efficiency: water attraction can cause sticking or plugging, while very dry material can generate a charge that causes it to cling to the screen itself. As with any industrial process, there is a group of terms that identify and define what screening is. Terms like blinding, contamination, frequency, amplitude, and others describe the basic characteristics of screening, and those characteristics in turn shape the overall method of dry or wet screening. In addition, the way a deck is vibrated differentiates screens. Different types of motion have their advantages and disadvantages. Likewise, media types have different properties that lead to advantages and disadvantages. Finally, there are issues and problems associated with screening. Screen tearing, contamination, blinding, and dampening all affect screening efficiency. As in any mechanical and physical field, there is scientific, industrial, and layman terminology. The following is a partial list of terms that are associated with mechanical screening. There are a number of types of mechanical screening equipment that cause segregation. These types are based on the motion of the machine through its motor drive. An improvement on vibration, vibratory, and linear screeners, a tumbler screener uses elliptical action, which aids in screening even very fine material. As with panning for gold, the fine particles tend to stay toward the center and the larger ones go to the outside. This allows for segregation and unloads the screen surface so that it can effectively do its job. With the addition of multiple decks and ball-cleaning decks, even difficult products can be screened at high capacity to very fine separations.
[ 8 ] Circle-throw vibrating equipment is a shaker or a series of shakers in which the drive causes the whole structure to move. The structure extends to a maximum throw or length and then contracts to a base state. A pattern of springs is situated below the structure to provide vibration and shock absorption as the structure returns to the base state. This type of equipment is used for very large particles, sizes that range from pebble-size up to boulder-size material. It is also designed for high volume output. As a scalper, this shaker allows oversize material to pass over and fall into a crusher such as a cone crusher, jaw crusher, or hammer mill. The material that passes the screen bypasses the crusher and is conveyed and combined with the crushed material. This equipment is also used in washing processes: as material passes under spray bars, finer material and foreign material are washed through the screen. This is one example of wet screening. High-frequency vibrating screening equipment is a shaker whose frame is fixed and whose drive vibrates only the screen cloth. High-frequency vibration equipment is for particles in the size range from 1/8 in (3 mm) down to +150 mesh. Traditional shaker screeners have a difficult time making separations at sizes such as 44 microns, while other high-energy sieves, such as Elcan Industries' advanced screening technology, allow much finer separations, down to as fine as 10 μm and even 5 μm. These shakers usually make a secondary cut for further processing or make a finished product cut. They are usually set at a steep angle relative to the horizontal plane, ranging from 25 to 45 degrees. This type of equipment has an eccentric drive or weights that cause the shaker to travel in an orbital path. The material rolls over the screen and falls with the induction of gravity and directional shifts. Rubber balls and trays provide an additional mechanical means to cause the material to fall through. The balls also provide a throwing action that helps the material find an open slot to fall through. The shaker is set at a shallow angle relative to the horizontal plane, usually no more than 2 to 5 degrees. These types of shakers are used for very clean cuts. Generally, a final material cut will not contain any oversize or any fines contamination. These shakers are designed for the highest attainable quality at the cost of a reduced feed rate. Trommel screens have a rotating drum set at a shallow angle with screen panels around the diameter of the drum. The feed material always sits at the bottom of the drum and, as the drum rotates, always comes into contact with clean screen. The oversize travels to the end of the drum as it does not pass through the screen, while the undersize passes through the screen into a launder below. There are many ways to install screen media into a screen box deck (shaker deck). The type of attachment system also influences the dimensions of the media. Tensioned screen cloth is typically 4 feet by the width or the length of the screening machine, depending on whether the deck is side- or end-tensioned. Screen cloth for tensioned decks can be made with hooks and is attached with clamp rails bolted on both sides of the screen box. When the clamp rail bolts are tightened, the cloth is tensioned, or even stretched in the case of some types of self-cleaning screen media.
To ensure that the center of the cloth does not tap repeatedly on the deck due to the vibrating shaker, and that the cloth stays tensioned, support bars are positioned at different heights on the deck to create a crown curve from hook to hook on the cloth. [ 9 ] Tensioned screen cloth is available in various materials: stainless steel, high-carbon steel and oil-tempered steel wires, as well as moulded rubber or polyurethane and hybrid screens (a self-cleaning screen cloth made of rubber or polyurethane and metal wires). Commonly, vibratory-type screening equipment employs rigid, circular sieve frames to which woven wire mesh is attached. Conventional methods of producing tensioned meshed screens have given way in recent years to bonding, whereby the mesh is no longer tensioned and trapped between a sieve frame body and clamping ring; instead, developments in modern adhesive technologies have allowed the industry to adopt high-strength structural adhesives to bond tensioned mesh directly to frames. [ 10 ] Modular screen media is typically 1 foot wide by 1 or 2 feet long [ 11 ] (4 feet long for ISEPREN WS 85 [ 12 ] ) steel-reinforced polyurethane or rubber panels. They are installed on a flat deck (no crown) that normally has a larger surface than a tensioned deck. This larger surface design compensates for the fact that rubber and polyurethane modular screen media offer less open area than wire cloth. Over the years, numerous ways have been developed to attach modular panels to the screen deck stringers (girders). [ 13 ] Some of these attachment systems have been or are currently patented. [ 14 ] Self-cleaning screen media is also available on this modular system. [ 15 ] There are several types of screen media manufactured from different materials that use the two common types of screen media attachment systems, tensioned and modular. Woven wire cloth, typically produced from stainless steel, is commonly employed as a filtration medium for sieving in a wide range of industries. Most often woven with a plain weave, or a twill weave for the lightest of meshes, apertures can be produced from a few microns upwards (e.g. 25 microns), employing wires with diameters from as little as 25 microns. A twill weave allows a mesh to be woven when the wire diameter is too thick in proportion to the aperture. Other, less commonplace weaves, such as Dutch/Hollander, allow the production of meshes that are stronger and/or have smaller apertures. Today wire cloth is woven to strict international standards, e.g. ISO1944:1999, [ 16 ] which dictate acceptable tolerances regarding nominal mesh count and blemishes. The nominal mesh count, by which mesh is generally defined, is a measure of the number of openings per lineal inch, determined by counting the number of openings from the centre of one wire to the centre of another wire one lineal inch away. [ 17 ] For example, a 2 mesh woven with a wire of 1.6 mm diameter has an aperture of 11.1 mm. The formula for calculating the aperture of a mesh, with a known mesh count and wire diameter, is a = (25.4 / b) − c, where a is the aperture in mm, b is the mesh count (openings per lineal inch) and c is the wire diameter in mm. Other calculations regarding woven wire cloth/mesh can be made, including weight and open-area determination. [ 18 ] Of note, wire diameters are often referred to by their standard wire gauge (swg); e.g. a 1.6 mm wire is 16 swg.
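The aperture formula above, together with the usual open-area expression for plain square-weave cloth, open area % = (aperture / (aperture + wire diameter))^2 x 100, is easy to put into a few lines of code. This is an illustrative sketch only; the worked values are the 2 mesh / 1.6 mm example from the text.

def aperture_mm(mesh_count, wire_diameter_mm):
    """Aperture of a plain square-weave cloth: a = (25.4 / b) - c."""
    return 25.4 / mesh_count - wire_diameter_mm

def open_area_percent(mesh_count, wire_diameter_mm):
    """Open area of a plain square weave: (a / (a + c))^2 * 100."""
    a = aperture_mm(mesh_count, wire_diameter_mm)
    pitch = a + wire_diameter_mm            # equal to 25.4 / mesh_count
    return (a / pitch) ** 2 * 100.0

# The worked example from the text: 2 mesh woven with 1.6 mm wire.
print(aperture_mm(2, 1.6))                   # 11.1 mm
print(round(open_area_percent(2, 1.6), 1))   # about 76.4 %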
Traditionally, screen cloth was made with metal wires woven on a loom. [ 19 ] [ 20 ] [ 21 ] Today, woven cloth is still widely used, primarily because it is less expensive than other types of screen media. Over the years, different weaving techniques have been developed, either to increase the open-area percentage or to add wear life. Slotted-opening woven cloth [ 22 ] is used where product shape is not a priority and where users need a higher open-area percentage. Flat-top woven cloth [ 23 ] is used when the consumer wants to increase wear life. On regular woven wire, the crimps (knuckles on woven wires) wear out faster than the rest of the cloth, resulting in premature breakage. On flat-top woven wire, the cloth wears evenly until half of the wire diameter is worn, resulting in a longer wear life. Flat-top woven wire cloth is not widely used, however, because the lack of crimps causes a pronounced reduction in the passing of fines, resulting in premature wear of cone crushers. On a crushing and screening plant, punch plates or perforated plates [ 24 ] are mostly used on scalper vibrating screens, after raw products pass over grizzly bars. [ 25 ] Most likely installed on a tensioned deck, punch plates offer excellent wear life for high-impact and high-material-flow applications. Synthetic screen media is used where wear life is an issue. Large producers such as mines or huge quarries use them to reduce the frequency of having to stop the plant for screen deck maintenance. Rubber is also used as a very resistant, high-impact screen media material on the top deck of a scalper screen. To compete with rubber screen media, polyurethane manufacturers developed screen media with lower Shore hardness. To compete with self-cleaning screen media, which is still primarily available as tensioned cloth, synthetic screen media manufacturers also developed membrane screen panels, slotted-opening panels and diamond-opening panels. Due to the 7-degree demoulding angle, polyurethane screen media users can experience granulometry changes of the product during the wear life of the panel. [ 26 ] Self-cleaning screen media was initially engineered to resolve screen cloth blinding, clogging and pegging problems. The idea was to place crimped wires side by side on a flat surface, creating openings, and then, in some way, hold them together over the support bars (crown bars or bucker bars). This allows the wires to vibrate freely between the support bars, preventing blinding, clogging and pegging of the cloth. Initially, crimped longitudinal wires on self-cleaning cloth were held together over support bars with woven wire. [ 27 ] In the 1950s, some manufacturers started to cover the woven cross wires with caulking or rubber to prevent premature wear of the crimps (knuckles on woven wires). One of the pioneering products in this category was ONDAP GOMME, made by the French manufacturer Giron. [ 28 ] During the mid-1990s, Major Wire Industries Ltd., a Quebec manufacturer, developed a "hybrid" self-cleaning screen cloth called Flex-Mat, without woven cross wires. [ 29 ] In this product, the crimped longitudinal wires are held in place by polyurethane strips. Rather than locking (impeding) vibration over the support bars, as woven cross wires do, the polyurethane strips merely reduce the vibration of the longitudinal wires over the support bars, thus allowing vibration from hook to hook.
[ 30 ] Major Wire quickly started to promote this product as a high-performance screen that helped producers screen more in-specification material at lower cost, rather than simply as a problem solver. [ 31 ] They claimed that the independent vibrating wires helped produce more product compared to a woven wire cloth with the same opening (aperture) and wire diameter. This higher throughput would be a direct result of the higher vibration frequency of each independent wire of the screen cloth (measured in hertz) compared to the shaker vibration (measured in RPM), accelerating the stratification of the material bed. Another benefit that helped the throughput increase is that hybrid self-cleaning screen media offered a better open-area percentage than woven wire screen media. Due to its flat surface (no knuckles), hybrid self-cleaning screen media can use a smaller wire diameter for the same aperture than woven wire and still last as long, resulting in a greater open-area percentage.
https://en.wikipedia.org/wiki/Mechanical_screening
In classical mechanics, a branch of physics that overlaps with applied mathematics, mechanical similarity occurs when the potential energy is a homogeneous function of the positions of the particles, with the result that the trajectories of the particles in the system are geometrically similar paths, differing in size but retaining their shape. Consider a system of any number of particles and assume that the interaction energy between any pair of particles has the form U ∝ r^k, a homogeneous function of degree k, where r is the distance between the two particles. In such a case the solutions to the equations of motion are a series of geometrically similar paths, and the times of motion t at corresponding points on the paths are related to the linear size l of the path by t′/t = (l′/l)^(1 − k/2). This classical mechanics–related article is a stub. You can help Wikipedia by expanding it.
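Two familiar consequences follow directly from this relation; the worked cases below are standard illustrations rather than part of the original article. For gravitational or Coulomb attraction the potential is homogeneous of degree k = −1 (U ∝ 1/r), so t′/t = (l′/l)^(3/2); squaring both sides gives (t′/t)² = (l′/l)³, which is Kepler's third law relating the squares of orbital periods to the cubes of orbital dimensions. For a harmonic potential, U ∝ r², so k = 2 and t′/t = (l′/l)⁰ = 1: the period of oscillation is independent of the amplitude.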
https://en.wikipedia.org/wiki/Mechanical_similarity
In engineering, a mechanical singularity is a position or configuration of a mechanism or a machine where the subsequent behaviour cannot be predicted, or where the forces or other physical quantities involved become infinite or nondeterministic. When the underlying engineering equations of a mechanism or machine are evaluated at the singular configuration (if any exists), those equations exhibit a mathematical singularity. Examples of mechanical singularities are gimbal lock and, in static mechanical analysis, an under-constrained system. There are three types of singularities that can be found in mechanisms: direct-kinematics singularities, inverse-kinematics singularities, and combined singularities. These singularities occur when one or both Jacobian matrices of the mechanism become singular or rank-deficient. [ 1 ] The relationship between the input and output velocities of the mechanism is defined by the following general equation: A ẋ + B q̇ = 0, where ẋ is the vector of output velocities, q̇ is the vector of input velocities, A is the direct-kinematics Jacobian, and B is the inverse-kinematics Jacobian. The first kind of singularity occurs when det(B) = 0. The second kind of singularity occurs when det(A) = 0. The third kind occurs when, for a particular configuration, both A and B become singular simultaneously. This article about a mechanical engineering topic is a stub. You can help Wikipedia by expanding it.
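As a concrete illustration (not taken from the article), consider a planar two-link serial arm with link lengths l1 and l2 and joint angles q1 and q2. For such a serial arm the relation above reduces to ẋ = J q̇, so A is the identity matrix and B = −J, and the singularity condition det(B) = 0 becomes det(J) = 0. The sketch below, with assumed link lengths, shows that the determinant vanishes when the arm is fully extended (q2 = 0), a classic singular configuration.

import math

def jacobian(l1, l2, q1, q2):
    """Velocity Jacobian of a planar 2R arm (end-effector x, y vs joint rates)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def det2(J):
    """Determinant of a 2x2 matrix."""
    return J[0][0] * J[1][1] - J[0][1] * J[1][0]

l1, l2 = 1.0, 0.8   # assumed link lengths in metres
for q2 in (math.radians(90), math.radians(10), 0.0):
    d = det2(jacobian(l1, l2, math.radians(30), q2))
    print(f"q2 = {math.degrees(q2):5.1f} deg  ->  det(J) = {d:+.4f}")

# det(J) = l1*l2*sin(q2), so the determinant vanishes at q2 = 0 (arm fully
# extended): the mechanism is singular and the end-effector momentarily loses
# the ability to move along one direction.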
https://en.wikipedia.org/wiki/Mechanical_singularity
Mechanical testing covers a wide range of tests, which can be divided broadly into two types. A large number of tests exist, many of which are standardized, to determine the various mechanical properties of materials. In general, such tests set out to obtain geometry-independent properties, i.e. those intrinsic to the bulk material. In practice this is not always feasible, since even in tensile tests certain properties can be influenced by specimen size and/or geometry. Here is a listing of some of the most common tests: [ 2 ]
https://en.wikipedia.org/wiki/Mechanical_testing
In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a material medium. [ 1 ] (A vacuum is, from the classical perspective, a non-material medium, in which electromagnetic waves propagate.) While waves can move over long distances, the movement of the medium of transmission, the material itself, is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves. Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one. A transverse wave is one in which the particles of the medium vibrate about their mean positions perpendicular to the direction of motion of the wave. Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. They consist of alternating compressions and rarefactions: in a rarefaction the particles are farthest apart, and in a compression they are closest together. The speed of a longitudinal wave is greater in stiffer media and in media whose atoms are more closely coupled. Sound is an example of a longitudinal wave. A surface wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean, lake, or any other body of water. There are two types of surface waves, namely Rayleigh waves and Love waves. Rayleigh waves, also known as ground roll, are waves that travel as ripples with motion similar to that of waves on the surface of water. Such waves travel more slowly than body waves, at roughly 90% of the shear-wave velocity for a typical homogeneous elastic medium. Rayleigh waves lose energy in only two dimensions and are hence more destructive in earthquakes than conventional bulk waves, such as P-waves and S-waves, which lose energy in all three directions. A Love wave is a surface wave having horizontal motion that is shear or transverse to the direction of propagation. Love waves usually travel slightly faster than Rayleigh waves, at about 90% of the body-wave velocity, and have the largest amplitude.
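The dependence of wave speed on the elasticity and inertia of the medium can be made concrete with the standard textbook formulas v = sqrt(T/mu) for a transverse wave on a stretched string (tension T, mass per unit length mu) and v = sqrt(K/rho) for a longitudinal sound wave in a fluid (bulk modulus K, density rho). The sketch below is illustrative only; the string values are assumptions and the water values are approximate handbook figures.

import math

def transverse_speed_on_string(tension_N, mass_per_length_kg_m):
    """v = sqrt(T / mu) for a transverse wave on a stretched string."""
    return math.sqrt(tension_N / mass_per_length_kg_m)

def longitudinal_speed_in_fluid(bulk_modulus_Pa, density_kg_m3):
    """v = sqrt(K / rho) for a longitudinal (sound) wave in a fluid."""
    return math.sqrt(bulk_modulus_Pa / density_kg_m3)

# A guitar-like string: 70 N of tension, 5 g per metre (assumed values).
print(f"string: {transverse_speed_on_string(70.0, 0.005):.0f} m/s")

# Water: bulk modulus about 2.2 GPa, density about 1000 kg/m^3.
print(f"water:  {longitudinal_speed_in_fluid(2.2e9, 1000.0):.0f} m/s")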
https://en.wikipedia.org/wiki/Mechanical_wave
Mechanically aided scrubbers are a form of pollution control technology. This type of technology is part of the group of air pollution controls collectively referred to as wet scrubbers. In addition to using liquid sprays or the exhaust stream, scrubbing systems can use motors to supply energy. The motor drives a rotor or paddles which, in turn, generate water droplets for gas and particle collection. Systems designed in this manner have the advantage of requiring less space than other scrubbers, but their overall power requirements tend to be higher than those of other scrubbers of equivalent efficiency. Significant power losses occur in driving the rotor; therefore, not all the power used is expended for gas–liquid contact. There are fewer mechanically aided scrubber designs available than liquid- and gas-phase contacting collector designs. Two of the more common designs are centrifugal fan scrubbers and mechanically induced spray scrubbers. A centrifugal-fan scrubber can serve as both an air mover and a collection device. In such a system, water is sprayed onto the fan blades concurrently with the moving exhaust gas. Some gaseous pollutants and particles are initially removed as they pass over the liquid sprays. The liquid droplets then impact on the blades to create smaller droplets, which act as additional collection targets. Collection can also take place on the liquid film that forms on the fan blades. The rotating blades force the liquid and collected particles off the blades, and the liquid droplets separate from the gas stream because of their centrifugal motion. Centrifugal-fan collectors are the most compact of the wet scrubbers, since the fan and collector comprise a combined unit. No internal pressure loss occurs across the scrubber, but a power loss equivalent to a pressure drop of 10.2 to 15.2 cm (4 to 6 in) of water occurs because the blower efficiency is low. Another mechanically aided scrubber, the induced-spray scrubber, consists of a whirling rotor submerged in a pool of liquid. The whirling rotor produces a fine droplet spray. By moving the process gas through the spray, particles and gaseous pollutants can be collected. One design of induced-spray scrubber uses a vertical-spray rotor. Mechanically aided scrubbers are capable of high collection efficiencies for particles with diameters of 1 μm or greater. Achieving these efficiencies usually requires a greater energy input than for other scrubbers operating at similar efficiencies. In mechanically aided scrubbers, the majority of particle collection occurs in the liquid droplets formed by the rotating blades or rotor. Mechanically aided scrubbers are generally not used for gas absorption. The contact time between the gas and liquid phases is very short, limiting absorption. For gas removal, several other scrubbing systems provide much better removal per unit of energy consumed. As with almost any device, the addition of moving parts leads to an increase in potential maintenance problems. Mechanically aided scrubbers have higher maintenance costs than other wet collector systems. The moving parts are particularly susceptible to corrosion and fouling. In addition, rotating parts are subject to vibration-induced fatigue or wear, causing them to become unbalanced. Corrosion-resistant materials for these scrubbers are very expensive; therefore, these devices are not used in applications where corrosion or sticky materials could cause problems.
Mechanically aided scrubbers have been used to control exhaust streams containing particulate matter. They have the advantage of being smaller than most other scrubbing systems, since the fan is incorporated into the scrubber. In addition, they operate with low liquid-to-gas ratios. Their disadvantages include their generally high maintenance requirements, low absorption efficiency, and high operating costs. The performance characteristics of mechanically aided scrubbers are given in Table 1. [ 1 ] Note: these devices are used mainly for particle collection, though they can also remove gaseous pollutants from the exhaust stream.
https://en.wikipedia.org/wiki/Mechanically_aided_scrubber
Polymer gradient materials (PGM) are a class of polymers with gradually changing mechanical properties along a defined direction, creating an anisotropic material. These materials can be defined based upon the direction and the steepness of the gradient used and can display gradient or graded transitions. [ 1 ] A wide range of methods can be used to create these gradients, including gradients in reinforcing fillers, cross-link density or porosity, to name a few. 3D printing has been used to combine several of these methods in one manufacturing technique. [ 2 ] These materials can be inspired by nature, where mechanical gradients are commonly used to improve the interface between two dissimilar surfaces. When two materials that have different moduli are connected together in a bilayer, this can create a weak junction, whereas a mechanical gradient can reduce the stress and strain at the connection. In contrast, a butt joint, which gives a junction between materials with little to no gradient, has been shown to be weaker than the homogeneous components. [ 3 ] Mechanically gradient polymers are not manufactured as extensively as gradients are found in nature, and there can also be unintended gradients introduced during the manufacturing process. Materials are not always completely uniform despite the intentions of the manufacturer, and these unintended gradients may weaken the material rather than improve it. Therefore, mechanical gradients must be properly matched to the particular application to prevent the introduction of instabilities. [ 4 ] Reinforcing fillers such as carbon nanotubes, which have high mechanical moduli, have commonly been used to create polymer composites with high strength and toughness. [ 5 ] Since the modulus is linked to the filler content, varying the amount of filler across the polymer changes the modulus accordingly. [ 6 ] Additionally, since long nanofillers create anisotropic moduli, the modulus gradient could also be tuned by varying the orientation of the nanofiller along the length of the polymer. A common approach to increasing the mechanical strength of polymers is changing the crosslinking density of the polymer. Crosslinks connect the polymer chains, creating a network that resists deformation. Therefore, increasing the crosslinking density in a section of a polymer will increase the modulus in that location. This can be used to create a mechanical gradient if the crosslinking density changes across the polymer. A common approach to achieving this is a photopolymerization process, in which changes in UV exposure alter the degree of crosslinking or polymerization in the exposed area. [ 7 ] In a similar manner, the amount of initiator or crosslinker could be varied across the sample, creating a similar effect. Porosity can be used to decrease the modulus of a polymer, as seen in polymer foams, and, as seen in bone, a change in porosity and therefore density can also be used to create a mechanical gradient along a cross-section. [ 8 ] There are many examples in nature where soft tissue and hard surfaces are connected by a mechanical gradient to improve fracture and impact resistance. Examples include mussels, which attach to hard rocks by the mussel byssus, which connects back to the soft muscle of the foot. [ 9 ] A more extreme example is the squid beak, which has an extremely hard tip, required to kill and dismember prey, that connects back to the soft flesh of the body of the squid.
Without the mechanical gradient in the beak, the squid would be unable to withstand high impacts: despite its hardness, the beak would break off from the body at the junction between the materials. [ 10 ] As mentioned above, many systems in nature incorporate mechanical gradients, and such gradients can likewise be useful in biomedical implants. Many implants are stiff and can damage the surrounding tissues because of this difference in stiffness. This is a problem, for instance, with microelectrodes implanted into the brain, which is extremely soft. The damage caused can create a buildup of fibrous tissue, which can then interfere with the signal between the electrode and the brain. [ 11 ] Similarly, in knee and hip implants, there is a need for good integration between the strong bone and the cartilage and soft tissue. Otherwise, problems such as stress shielding can occur, in which the bone degenerates because the implant has too high a modulus. [ 12 ]
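One simple way to reason about the filler-gradient approach described earlier is the Voigt rule of mixtures, E(x) ≈ (1 − φ(x))·E_matrix + φ(x)·E_filler, which gives an upper-bound estimate of the local modulus for a local filler volume fraction φ(x). The sketch below is illustrative only; the moduli and the linear filler profile are assumed values, not data from the article.

# Upper-bound (Voigt) estimate of a modulus gradient produced by a
# linearly varying filler volume fraction along a sample of length L.
E_MATRIX = 2.0e9      # Pa, a typical glassy polymer (assumed)
E_FILLER = 200.0e9    # Pa, a stiff reinforcing filler (assumed)
L = 0.05              # m, sample length (assumed)

def filler_fraction(x):
    """Filler volume fraction rising linearly from 0 to 10% along the sample."""
    return 0.10 * x / L

def local_modulus(x):
    """Voigt rule of mixtures: E = (1 - phi) * E_matrix + phi * E_filler."""
    phi = filler_fraction(x)
    return (1.0 - phi) * E_MATRIX + phi * E_FILLER

for x in (0.0, 0.025, 0.05):
    print(f"x = {x*1000:4.1f} mm  ->  E = {local_modulus(x)/1e9:5.2f} GPa")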
https://en.wikipedia.org/wiki/Mechanically_gradient_polymers
In chemistry, mechanically interlocked molecular architectures (MIMAs) are molecules that are connected as a consequence of their topology. This connection of molecules is analogous to keys on a keychain loop: the keys are not directly connected to the keychain loop, but they cannot be separated without breaking the loop. On the molecular level, the interlocked molecules cannot be separated without breaking the covalent bonds that comprise the conjoined molecules; this is referred to as a mechanical bond. Examples of mechanically interlocked molecular architectures include catenanes, rotaxanes, molecular knots, and molecular Borromean rings. Work in this area was recognized with the 2016 Nobel Prize in Chemistry to Bernard L. Feringa, Jean-Pierre Sauvage, and J. Fraser Stoddart. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The synthesis of such entangled architectures has been made efficient by combining supramolecular chemistry with traditional covalent synthesis; however, mechanically interlocked molecular architectures have properties that differ from both "supramolecular assemblies" and "covalently bonded molecules". The terminology "mechanical bond" has been coined to describe the connection between the components of mechanically interlocked molecular architectures. Although research into mechanically interlocked molecular architectures is primarily focused on artificial compounds, many examples have been found in biological systems, including cystine knots, cyclotides and lasso-peptides such as microcin J25, which are proteins, and a variety of peptides. Residual topology [ 5 ] is a descriptive stereochemical term to classify a number of intertwined and interlocked molecules which cannot be disentangled in an experiment without breaking covalent bonds, even though the strict rules of mathematical topology would allow such a disentanglement. Examples of such molecules are rotaxanes, catenanes with covalently linked rings (so-called pretzelanes), and open knots (pseudoknots), which are abundant in proteins. The term "residual topology" was suggested on account of a striking similarity of these compounds to the well-established topologically nontrivial species, such as catenanes and knotanes (molecular knots). The idea of residual topological isomerism introduces a handy scheme for modifying molecular graphs and generalizes former efforts to systematize mechanically bound and bridged molecules. Experimentally, the first examples of mechanically interlocked molecular architectures appeared in the 1960s, with catenanes being synthesized by Wasserman and Schill and rotaxanes by Harrison and Harrison. The chemistry of MIMAs came of age when Sauvage pioneered their synthesis using templating methods. [ 6 ] In the early 1990s the usefulness and even the existence of MIMAs were challenged. The latter concern was addressed by the X-ray crystallographer and structural chemist David Williams. Two postdoctoral researchers who took on the challenge of producing [5]catenane (olympiadane) pushed the boundaries of the complexity of MIMAs that could be synthesized; their success was confirmed in 1996 by a solid-state structure analysis conducted by David Williams. [ 7 ] The introduction of a mechanical bond alters the chemistry of the subcomponents of rotaxanes and catenanes: steric hindrance of reactive functionalities is increased, and the strength of non-covalent interactions between the components is altered.
[ 8 ] The strength of non-covalent interactions in a mechanically interlocked molecular architecture is increased compared with the non-mechanically-bonded analogues. This increased strength is demonstrated by the harsher conditions needed to remove a metal template ion from catenanes as opposed to their non-mechanically-bonded analogues. This effect is referred to as the "catenand effect". [ 9 ] [ 10 ] The augmented non-covalent interactions in interlocked systems compared with non-interlocked systems have found utility in the strong and selective binding of a range of charged species, enabling the development of interlocked systems for the extraction of a range of salts. [ 11 ] This increase in strength of non-covalent interactions is attributed to the loss of degrees of freedom upon the formation of a mechanical bond. The increase in strength of non-covalent interactions is more pronounced in smaller interlocked systems, where more degrees of freedom are lost, than in larger mechanically interlocked systems, where the change in degrees of freedom is smaller. Therefore, if the ring in a rotaxane is made smaller, the strength of the non-covalent interactions increases; the same effect is observed if the thread is made smaller. [ 12 ] The mechanical bond can reduce the kinetic reactivity of the products; this is ascribed to the increased steric hindrance. Because of this effect, hydrogenation of an alkene on the thread of a rotaxane is significantly slower than for the equivalent non-interlocked thread. [ 13 ] This effect has allowed the isolation of otherwise reactive intermediates. The ability to alter reactivity without altering covalent structure has led to MIMAs being investigated for a number of technological applications. The ability of a mechanical bond to reduce reactivity, and hence prevent unwanted reactions, has been exploited in a number of areas. One of the earliest applications was in the protection of organic dyes from environmental degradation.
https://en.wikipedia.org/wiki/Mechanically_interlocked_molecular_architectures
In thermodynamics, a mechanically isolated system is a system that is mechanically constrained to disallow deformations, so that it cannot perform any work on its environment. It may, however, exchange heat across the system boundary. For a simple system, mechanical isolation is equivalent to a state of constant volume, and any process which occurs in such a simple system is said to be isochoric. [ 1 ] The opposite of a mechanically isolated system is a mechanically open system, [ citation needed ] which allows the transfer of mechanical energy. For a simple system, a mechanically open boundary is one that is allowed to move under pressure differences between the two sides of the boundary. At mechanical equilibrium, the pressures on both sides of a mechanically open boundary are equal, but only a mechanically isolating boundary can support a pressure difference. This thermodynamics-related article is a stub. You can help Wikipedia by expanding it.
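Because no work crosses the boundary of a mechanically isolated simple system, the first law of thermodynamics reduces to ΔU = Q for any process it undergoes. As a short worked illustration (the gas and the numbers are assumptions, not from the article): heating one mole of an ideal monatomic gas in a rigid container by 10 K requires Q = n C_V ΔT = 1 × (3/2 × 8.314 J/(mol·K)) × 10 K ≈ 125 J, and all of this energy appears as an increase in internal energy, since W = 0.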
https://en.wikipedia.org/wiki/Mechanically_isolated_system
Mechanically stimulated gas emission (MSGE) is a complex phenomenon embracing various physical and chemical processes occurring on the surface and in the bulk of a solid under applied mechanical stress and resulting in the emission of gases. MSGE is a part of a more general phenomenon, mechanically stimulated neutral emission (MSNE). [ 1 ] MSGE experiments are often performed in ultra-high vacuum. The specific characteristic of MSGE as compared with MSNE is that the emitted neutral particles are limited to gas molecules. MSGE is the opposite of mechanically stimulated gas absorption, which usually occurs under fretting corrosion of metals, exposure to gases at high pressures, etc. There are three main sources of MSGE. [ 2 ] [ 3 ] [ 4 ] [ 5 ] Generally, the mechanical action on the solid that produces MSGE can be of any type, including tension, compression, torsion, shearing, rubbing, fretting, rolling, indentation, etc. In previous studies carried out by various groups it was found that MSGE is associated mainly with plastic deformation, fracture, wear and other irreversible modifications of a solid. [ 8 ] [ 9 ] Under elastic deformation MSGE is almost negligible and has been observed only just below the elastic limit, probably due to microplastic deformation. In accordance with the main sources, the emitted gases usually contain hydrogen (source type IIa), argon (for coatings obtained using PVD in Ar plasma; source type IIb), methane (source type III), water (source type I and/or III), and carbon monoxide and dioxide (source type I/III). Knowledge of the mechanisms of MSGE is still limited. On the basis of the experimental findings it has been speculated that several processes can be related to MSGE; a thermal effect, however, seems to be irrelevant to gas emission under light-load conditions. [ 10 ] The emerging character of this interdisciplinary branch of science is reflected by a lack of established terminology. Different terms and definitions are used by different authors depending on the main approach taken (chemical, physical, mechanical, vacuum science, etc.), the specific gas emission mechanism (desorption, emanation, emission, etc.) and the type of mechanical activation (friction, traction, etc.). Desorption (tribodesorption, fractodesorption, etc.) refers to the release of gases dissolved in the bulk and adsorbed on the surface; desorption is therefore only one of the processes contributing to MSGE. Outgassing is a technical term usually used in vacuum science. Thus, the term "gas emission" embraces the various processes, reflects the physical nature of this complex phenomenon, and is preferable for use in scientific publications. Due to the low emission rate, experiments should be performed in ultrahigh vacuum (UHV). In some studies the materials were previously doped with tritium; the MSGE rate was then measured from the radioactivity released by the material under applied mechanical stress. [ 15 ]
https://en.wikipedia.org/wiki/Mechanically_stimulated_gas_emission
A mechanician is an engineer or a scientist working in the field of mechanics, or in a related field or sub-field: engineering or computational mechanics, applied mechanics, geomechanics, biomechanics, and mechanics of materials. Names other than mechanician have been used occasionally, such as mechaniker and mechanicist. The term mechanician is also used by the Irish Navy to refer to junior engine room ratings. In the British Royal Navy, Chief Mechanicians and Mechanicians 1st Class were Chief Petty Officers, Mechanicians 2nd and 3rd Class were Petty Officers, Mechanicians 4th Class were Leading Ratings, and Mechanicians 5th Class were Able Ratings. [ 1 ] The rate was only applied to certain technical specialists and no longer exists. In the New Zealand Post Office, which provided telephone service prior to the formation of Telecom New Zealand in 1987, "Mechanician" was a job classification for workers who serviced telephone exchange switching equipment. The term seems to have originated in the era of the 7A Rotary system exchange, and was superseded by "Technician" circa 1975, perhaps because "Mechanician" was no longer considered appropriate after the first 2000-type Step-by-Step Strowger switch exchanges began to be introduced in 1952 (in Auckland, at Birkenhead exchange). It is also the term that makers of mechanical automata use in reference to their profession. Professional bodies active in mechanics include the European Mechanics Society, the Applied Mechanics Division of the American Society of Mechanical Engineers, the American Society of Civil Engineers, and the Society of Engineering Science, Inc.
https://en.wikipedia.org/wiki/Mechanician
Mechanics (from Ancient Greek μηχανική (mēkhanikḗ) 'of machines') [ 1 ] [ 2 ] is the area of physics concerned with the relationships between force, matter, and motion among physical objects. [ 3 ] Forces applied to objects may result in displacements, which are changes of an object's position relative to its environment. Theoretical expositions of this branch of physics have their origins in Ancient Greece, for instance in the writings of Aristotle and Archimedes [ 4 ] [ 5 ] [ 6 ] (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo Galilei, Johannes Kepler, Christiaan Huygens, and Isaac Newton laid the foundation for what is now known as classical mechanics. As a branch of classical physics, mechanics deals with bodies that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as the physical science that deals with the motion of and forces on bodies not in the quantum realm. The ancient Greek philosophers were among the first to propose that abstract principles govern nature. The main theory of mechanics in antiquity was Aristotelian mechanics, though an alternative theory is expounded in the pseudo-Aristotelian Mechanical Problems, often attributed to one of his successors. There is another tradition that goes back to the ancient Greeks in which mathematics is used more extensively to analyze bodies statically or dynamically, an approach that may have been stimulated by prior work of the Pythagorean Archytas. [ 7 ] Examples of this tradition include pseudo-Euclid (On the Balance), Archimedes (On the Equilibrium of Planes, On Floating Bodies), Hero (Mechanica), and Pappus (Collection, Book VIII). [ 8 ] [ 9 ] In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, which was discussed by Hipparchus and Philoponus. The Persian Islamic polymath Ibn Sīnā published his theory of motion in The Book of Healing (1020). He said that an impetus is imparted to a projectile by the thrower, and viewed it as persistent, requiring external forces such as air resistance to dissipate it. [ 10 ] [ 11 ] [ 12 ] Ibn Sina made a distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gains mayl when the object is in opposition to its natural motion. He concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that the object will remain in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon, consistent with Newton's first law of motion. [ 10 ] On the question of a body subject to a constant (uniform) force, the 12th-century Jewish-Arab scholar Hibat Allah Abu'l-Barakat al-Baghdaadi (born Nathanel, Iraqi, of Baghdad) stated that constant force imparts constant acceleration. According to Shlomo Pines, al-Baghdaadi's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]."
[ 13 ] Influenced by earlier writers such as Ibn Sina [ 12 ] and al-Baghdaadi, [ 14 ] the 14th-century French priest Jean Buridan developed the theory of impetus, which later developed into the modern theories of inertia, velocity, acceleration and momentum. This work and others were developed in 14th-century England by the Oxford Calculators, such as Thomas Bradwardine, who studied and formulated various laws regarding falling bodies; the kinematics of uniformly accelerated motion (as of falling bodies), in particular, was worked out by the Oxford Calculators. Two central figures in the early modern age are Galileo Galilei and Isaac Newton. Galileo's final statement of his mechanics, particularly of falling bodies, is his Two New Sciences (1638). Newton's 1687 Philosophiæ Naturalis Principia Mathematica provided a detailed mathematical account of mechanics, using the newly developed mathematics of calculus and providing the basis of Newtonian mechanics. [ 9 ] There is some dispute over the priority of various ideas: Newton's Principia is certainly the seminal work and has been tremendously influential, and many of the mathematical results therein could not have been stated earlier without the development of the calculus. However, many of the ideas, particularly those pertaining to inertia and falling bodies, had been developed by prior scholars such as Christiaan Huygens and less well-known medieval predecessors. Precise credit is at times difficult or contentious because scientific language and standards of proof changed, so it is often debatable whether medieval statements are equivalent to modern statements with sufficient proof, or merely similar to modern statements and hypotheses. Two main modern developments in mechanics are Einstein's general relativity and quantum mechanics, both developed in the 20th century and based in part on earlier 19th-century ideas. The development of modern continuum mechanics, particularly in the areas of elasticity, plasticity, fluid dynamics, electrodynamics, and thermodynamics of deformable media, started in the second half of the 20th century. The often-used term body stands for a wide assortment of objects, including particles, projectiles, spacecraft, stars, parts of machinery, parts of solids, parts of fluids (gases and liquids), etc. Other distinctions between the various sub-disciplines of mechanics concern the nature of the bodies being described. Particles are bodies with little (known) internal structure, treated as mathematical points in classical mechanics. Rigid bodies have size and shape, but retain a simplicity close to that of the particle, adding just a few so-called degrees of freedom, such as orientation in space. Otherwise, bodies may be semi-rigid, i.e. elastic, or non-rigid, i.e. fluid. These subjects have both classical and quantum divisions of study. For instance, the motion of a spacecraft, regarding its orbit and attitude (rotation), is described by the relativistic theory of classical mechanics, while the analogous movements of an atomic nucleus are described by quantum mechanics. The following are the three main designations consisting of various subjects that are studied in mechanics. Note that there is also the "theory of fields", which constitutes a separate discipline in physics, formally treated as distinct from mechanics, whether classical fields or quantum fields. But in actual practice, subjects belonging to mechanics and fields are closely interwoven.
Thus, for instance, forces that act on particles are frequently derived from fields (electromagnetic or gravitational), and particles generate fields by acting as sources. In fact, in quantum mechanics, particles themselves are fields, as described theoretically by the wave function. The following are described as forming classical mechanics: The following are categorized as being part of quantum mechanics: Historically, classical mechanics had been around for nearly a quarter millennium before quantum mechanics developed. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica, developed over the seventeenth century. Quantum mechanics developed later, at the beginning of the twentieth century, precipitated by Planck's postulate and Albert Einstein's explanation of the photoelectric effect. Both fields are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the extensive use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is of a broader scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers, i.e. if quantum mechanics is applied to large systems (e.g. a baseball), the result would be almost the same as if classical mechanics had been applied. Quantum mechanics has superseded classical mechanics at the foundation level and is indispensable for the explanation and prediction of processes at the molecular, atomic, and sub-atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are unmanageably difficult (mainly due to computational limits) in quantum mechanics and hence remains useful and well used. Modern descriptions of such behavior begin with a careful definition of such quantities as displacement (distance moved), time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. For example, following the ideas of the Greek philosopher and scientist Aristotle, scientists reasoned that a cannonball falls down because its natural position is in the Earth; the Sun, the Moon, and the stars travel in circles around the Earth because it is the nature of heavenly objects to travel in perfect circles. Often cited as the father of modern science, Galileo brought together the ideas of other great thinkers of his time and began to calculate motion in terms of distance travelled from some starting position and the time that it took. He showed that the speed of falling objects increases steadily during the time of their fall. This acceleration is the same for heavy objects as for light ones, provided air friction (air resistance) is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating these to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity.
In relativistic calculations, for example, kinematic quantities acquire a dependence on the Lorentz factor γ = 1/√(1 − v²/c²), which makes the algebra considerably more involved than in the Newtonian case. For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion. Akin to the distinction between quantum and classical mechanics, Albert Einstein's general and special theories of relativity have expanded the scope of Newton and Galileo's formulation of mechanics. The differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a body approaches the speed of light. For instance, in Newtonian mechanics, the kinetic energy of a free particle is E = (1/2)mv², whereas in relativistic mechanics it is E = (γ − 1)mc² (where γ is the Lorentz factor; this formula reduces to the Newtonian expression in the low-energy limit). [ 17 ] For high-energy processes, quantum mechanics must be adjusted to account for special relativity; this has led to the development of quantum field theory. [ 18 ]
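The divergence between the two kinetic-energy expressions quoted above can be checked numerically. The sketch below is illustrative only; the mass and the sample speeds are arbitrary values.

import math

C = 299_792_458.0   # speed of light in m/s

def ke_newton(m, v):
    """Newtonian kinetic energy, (1/2) m v^2."""
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    """Relativistic kinetic energy, (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C)**2)
    return (gamma - 1.0) * m * C**2

m = 1.0  # kg (assumed)
for v in (30.0, 0.1 * C, 0.9 * C):
    n, r = ke_newton(m, v), ke_relativistic(m, v)
    print(f"v = {v:12.3e} m/s   Newtonian = {n:10.3e} J   relativistic = {r:10.3e} J")

# At everyday speeds the two expressions agree to many significant figures;
# near the speed of light the relativistic value is far larger.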
https://en.wikipedia.org/wiki/Mechanics
Mechanics of Advanced Composite Structures is a biannual peer-reviewed open-access scientific journal published by Semnan University. The editor-in-chief is Abdoulhossein Fereidoon (Semnan University). The journal covers all aspects of research on composite structures. It was established in 2014 and is abstracted and indexed in Scopus. [ 1 ] This article about a materials science journal is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Mechanics_of_Advanced_Composite_Structures
Mechanics of gelation describes processes relevant to the sol-gel process. In a static sense, the fundamental difference between a liquid and a solid is that the solid has elastic resistance against a shearing stress while a liquid does not. Thus, a simple liquid will not typically support a transverse acoustic phonon, or shear wave. Gels have been described by Born as liquids in which an elastic resistance against shearing survives, yielding both viscous and elastic properties. It has been shown theoretically that in a certain low-frequency range, polymeric gels should propagate shear waves with relatively low damping. The distinction between a sol (solution) and a gel can therefore be understood in a manner analogous to the practical distinction between the elastic and plastic deformation ranges of a metal: it lies in the ability to respond to an applied shear force via macroscopic viscous flow. [ 1 ] [ 2 ] [ 3 ] In a dynamic sense, the response of a gel to an alternating force (oscillation or vibration) depends upon the period or frequency of vibration. Even most simple liquids will exhibit some elastic response at shear rates or frequencies exceeding 5 × 10⁶ cycles per second. Experiments on such short time scales probe the fundamental motions of the primary particles (or particle clusters) which constitute the lattice structure or aggregate. The increasing resistance of certain liquids to flow at high stirring speeds is one manifestation of this phenomenon. The ability of a condensed body to respond to a mechanical force by viscous flow is thus strongly dependent on the time scale over which the load is applied, and thus on the frequency and amplitude of the stress wave in oscillatory experiments. [ 4 ] [ 5 ] [ 6 ] The structural relaxation of a viscoelastic gel has been identified as the primary mechanism responsible for densification and associated pore evolution in both colloidal and polymeric silica gels. [ 7 ] Experiments on the viscoelastic properties of such skeletal networks on various time scales require a force varying with a period (or frequency) appropriate to the relaxation time of the phenomenon investigated, and inversely proportional to the distance over which such relaxation occurs. High frequencies associated with ultrasonic waves have been used extensively in the handling of polymer solutions, liquids and gels and in the determination of their viscoelastic properties. Static measurements of the shear modulus have been made, [ 8 ] as well as dynamic measurements of the speed of propagation of shear waves, [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] which yield the dynamic modulus of rigidity. Dynamic light scattering (DLS) techniques have been used to monitor the dynamics of density fluctuations through the behavior of the autocorrelation function near the point of gelation. Tanaka et al. emphasize that the discrete and reversible volume transitions which occur in partially hydrolyzed acrylamide gels can be interpreted in terms of a phase transition of the system consisting of the charged polymer network, hydrogen (counter)ions and liquid matrix. The phase transition is a manifestation of competition among the three forces which contribute to the osmotic pressure in the gel; the balance of these forces varies with changes in temperature or solvent properties, and the total osmotic pressure acting on the gel is the sum of these contributions.
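One standard textbook way to formalize the frequency-dependent balance of viscous and elastic response described above (not taken from this article) is the Maxwell model of linear viscoelasticity, a spring of modulus G in series with a dashpot of viscosity η:

```latex
% Maxwell model of linear viscoelasticity: relaxation time tau = eta / G.
\[
G'(\omega) = G\,\frac{\omega^{2}\tau^{2}}{1 + \omega^{2}\tau^{2}},
\qquad
G''(\omega) = G\,\frac{\omega\tau}{1 + \omega^{2}\tau^{2}}.
\]
% For omega*tau << 1 the loss modulus G'' dominates and the material flows like
% a liquid; for omega*tau >> 1 the storage modulus G' dominates and the response
% is elastic, which is why even simple liquids respond elastically at
% sufficiently high frequencies.
```

A gel differs from such a simple liquid in that its storage modulus does not vanish at low frequency, reflecting the persistent elastic resistance to shear discussed above.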
Tanaka and co-workers further show that the phase transition can be induced by the application of an electric field across the gel. The volume change at the transition point is either discrete (as in a first-order Ehrenfest transition) or continuous (in analogy with a second-order Ehrenfest transition), depending on the degree of ionization of the gel and on the solvent composition. [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] The gel is thus interpreted as an elastic continuum which deforms when subjected to externally applied shear forces but is incompressible upon application of hydrostatic pressure. This combination of fluidity and rigidity is explained in terms of the gel structure: a liquid contained within a fibrous polymer network or matrix, held in place by the extremely large friction between the liquid and the fibre or polymer network. Thermal fluctuations may produce infinitesimal expansion or contraction within the network, and the evolution of such fluctuations ultimately determines the molecular morphology and the degree of hydration of the body. Quasi-elastic light scattering offers direct experimental access to the wavelengths and lifetimes of critical fluctuations, which are governed by the viscoelastic properties of the gel. It is reasonable to expect a relationship between the amplitude of such fluctuations and the elasticity of the network. Since the elasticity measures the resistance of the network to either elastic (reversible) or plastic (irreversible) deformation, the fluctuations should grow larger as the elasticity declines. The divergence of the scattered light intensity at a finite critical temperature implies that the elasticity approaches zero, or the compressibility becomes infinite, which is the typically observed behavior of a system at a point of instability. Thus, at the critical point the polymer network offers no resistance at all to any form of deformation. The rate of relaxation of density fluctuations is rapid if the restoring force, which depends upon the network elasticity, is large and if the friction between the network and the interstitial fluid is small. The theory suggests that the rate is directly proportional to the elasticity and inversely proportional to the frictional force. The friction in turn depends upon both the viscosity of the fluid and the average size of the pores contained within the polymer network. Thus, if the elasticity is inferred from measurements of the scattering intensity and the viscosity is determined independently (via mechanical methods such as ultrasonic attenuation), measurement of the relaxation rate yields information on the pore size distribution contained within the polymer network; for example, large fluctuations in polymer density near the critical point yield large density differentials with a corresponding bimodal distribution of porosity. The difference in average size between the smaller pores (in the highly dense regions) and the larger pores (in regions of lower average density) will therefore depend upon the degree of phase separation which is allowed to occur before such fluctuations become thermally arrested or "frozen in" at or near the critical point of the transition.
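The proportionality stated above (relaxation rate proportional to the network elasticity and inversely proportional to the friction) can be written compactly in the Tanaka–Fillmore form for gel network dynamics; the symbols below are an illustrative sketch and are not taken from the article:

```latex
% Relaxation rate of a network density fluctuation of wavevector q.
% E is an effective (longitudinal) elastic modulus of the polymer network and
% f the network-fluid friction coefficient, which grows with the fluid
% viscosity and decreases as the average pore size increases.
\[
\Gamma(q) \approx D_{c}\, q^{2},
\qquad
D_{c} = \frac{E}{f},
\]
% so relaxation is fast for a stiff network (large E) and slow when the friction
% between the network and the interstitial fluid is large (large f).
```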
https://en.wikipedia.org/wiki/Mechanics_of_gelation
In biology, a mechanism is a system of causally interacting parts and processes that produce one or more effects. [ 1 ] Phenomena can be explained by describing their mechanisms. For example, natural selection is a mechanism of evolution; other mechanisms of evolution include genetic drift, mutation, and gene flow. In ecology, mechanisms such as predation and host-parasite interactions produce change in ecological systems. In practice, no description of a mechanism is ever complete because not all details of the parts and processes of a mechanism are fully known. For example, natural selection is a mechanism of evolution that includes countless inter-individual interactions with other individuals, components, and processes of the environment in which natural selection operates. Many characterizations/definitions of mechanisms in the philosophy of science/biology have been provided in the past decades. For example, one influential characterization of neuro- and molecular biological mechanisms by Peter K. Machamer, Lindley Darden and Carl Craver is as follows: mechanisms are entities and activities organized such that they are productive of regular changes from start to termination conditions. [ 2 ] Other characterizations have been proposed by Stuart Glennan (1996, 2002), who articulates an interactionist account of mechanisms, and William Bechtel (1993, 2006), who emphasizes parts and operations. [ 2 ] The characterization by Machamer et al. thus identifies mechanisms with entities and activities organized such that they are productive of regular changes from start conditions to termination conditions; this characterization has three distinguishable aspects. Mechanisms in science/biology have reappeared as a subject of philosophical analysis and discussion in the last several decades because of a variety of factors, many of which relate to metascientific issues such as explanation and causation. For example, the decline of Covering Law (CL) models of explanation, e.g. Hempel's deductive-nomological model, has stimulated interest in how mechanisms might play an explanatory role in certain domains of science, especially higher-level disciplines such as biology (e.g., neurobiology, molecular biology, neuroscience, and so on). This is not just because of the philosophical problem of giving some account of what "laws of nature" are, which CL models encounter, but also the incontrovertible fact that most biological phenomena are not characterizable in nomological terms (i.e., in terms of lawful relationships). For example, protein biosynthesis does not occur according to any law, and therefore, on the DN model, no explanation of the biosynthesis phenomenon could be given. Mechanistic explanations come in many forms. Wesley Salmon proposed what he called the "ontic" conception of explanation, which states that explanations are mechanisms and causal processes in the world. There are two such kinds of explanation: etiological and constitutive. Salmon focused primarily on etiological explanation, with respect to which one explains some phenomenon P by identifying its causes (and, thus, locating it within the causal structure of the world). Constitutive (or componential) explanation, on the other hand, involves describing the components of a mechanism M that is productive of (or causes) P.
Indeed, whereas (a) one may differentiate between descriptive and explanatory adequacy, where the former is characterized as the adequacy of a theory to account for at least all the items in the domain which need explaining, and the latter as the adequacy of a theory to account for no more than those domain items, and (b) past philosophies of science differentiate between descriptions of phenomena and explanations of those phenomena, in the non-ontic context of the mechanism literature descriptions and explanations seem to be identical. That is to say, to explain a mechanism M is to describe it: to specify its components, as well as the background, enabling, and similar conditions that constitute, in the case of a linear mechanism, its "start conditions".
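As an informal illustration only (not drawn from the article or from the cited authors), the Machamer–Darden–Craver characterization can be sketched as a simple data structure whose fields correspond to the entities, activities, and start/termination conditions it mentions:

```python
# Informal sketch of the Machamer-Darden-Craver schema: entities and activities
# organized so that they produce regular changes from start to termination
# conditions. Field names and example values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Mechanism:
    entities: list[str]                # the parts doing the work
    activities: list[str]              # what those parts do
    start_conditions: list[str]        # set-up conditions from which the mechanism runs
    termination_conditions: list[str]  # the regular changes in which it terminates


# Toy instance for protein biosynthesis, the example mentioned in the article.
protein_synthesis = Mechanism(
    entities=["ribosome", "mRNA", "tRNA", "amino acids"],
    activities=["initiation", "peptide-bond formation", "translocation"],
    start_conditions=["mRNA bound to the small ribosomal subunit"],
    termination_conditions=["completed polypeptide released"],
)
```

Describing such a mechanism, in the sense used above, then amounts to filling in these fields for the phenomenon of interest.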
https://en.wikipedia.org/wiki/Mechanism_(biology)