Mill's methods are five methods of induction described by philosopher John Stuart Mill in his 1843 book A System of Logic . [ 1 ] [ 2 ] They are intended to establish a causal relationship between two or more groups of data by analyzing their respective differences and similarities.

The method of agreement states: if two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree is the cause (or effect) of the given phenomenon. For a property to be a necessary condition it must always be present when the effect is present. We are therefore interested in looking at cases where the effect is present and noting which properties, among those considered to be 'possible necessary conditions', are present and which are absent. Any property that is absent when the effect is present cannot be a necessary condition for the effect. This method is also referred to more generally within comparative politics as the most different systems design.

Symbolically, the method of agreement can be represented as:

A B C D occur together with w x y z
A E F G occur together with w t u v
Therefore A is the cause, or the effect, of w.

To further illustrate this concept, consider two structurally different countries. Country A is a former colony, has a centre-left government, and has a federal system with two levels of government. Country B has never been a colony, has a centre-left government and is a unitary state. One factor that both countries have in common, the dependent variable in this case, is that they have a system of universal health care . Comparing the factors known about the countries, a comparative political scientist would conclude that the centre-left government is the independent variable which causes a system of universal health care, since it is the only one of the factors examined which holds constant between the two countries, and the theoretical backing for that relationship is sound: social democratic (centre-left) policies often include universal health care.

The method of difference states: if an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance save one in common, that one occurring only in the former, then the circumstance in which alone the two instances differ is the effect, or cause, or an indispensable part of the cause, of the phenomenon. This method is also known more generally as the most similar systems design within comparative politics.

As an example of the method of difference, consider two similar countries. Country A has a centre-right government, a unitary system and was a former colony. Country B has a centre-right government, a unitary system but was never a colony. The difference between the countries is that Country A readily supports anti-colonial initiatives, whereas Country B does not. The method of difference would identify the independent variable as each country's status as a former colony or not, with the dependent variable being support for anti-colonial initiatives. Out of the two similar countries compared, the only difference between them is whether or not they were formerly a colony; this explains the difference in the values of the dependent variable, with the former colony being more likely to support decolonization than the country with no history of being a colony.
If two or more instances in which the phenomenon occurs have only one circumstance in common, while two or more instances in which it does not occur have nothing in common save the absence of that circumstance, then the circumstance in which alone the two sets of instances differ is the effect, or cause, or a necessary part of the cause, of the phenomenon. Also called the "Joint Method of Agreement and Difference", this principle is a combination of two methods of agreement; despite the name, it is weaker than the direct method of difference and does not include it.

Symbolically, the joint method of agreement and difference can be represented as:

A B C occur together with x y z
A D E occur together with x v w
also B C occur with y z
Therefore A is the cause, or the effect, or a part of the cause, of x.

The method of residue states: subduct [ 3 ] from any phenomenon such part as is known by previous inductions to be the effect of certain antecedents, and the residue of the phenomenon is the effect of the remaining antecedents. That is, if a range of factors is believed to cause a range of phenomena, and we have matched all the factors except one with all the phenomena except one, then the remaining phenomenon can be attributed to the remaining factor.

Symbolically, the method of residue can be represented as:

A B C occur together with x y z
B is known to be the cause of y
C is known to be the cause of z
Therefore A is the cause of x.

The method of concomitant variations states: whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation. If, across a range of circumstances leading to a phenomenon, some property of the phenomenon varies in tandem with some factor existing in the circumstances, then the phenomenon can be associated with that factor. For instance, suppose that various samples of water, each containing both salt and lead , were found to be toxic. If the level of toxicity varied in tandem with the level of lead, one could attribute the toxicity to the presence of lead.

Symbolically, the method of concomitant variation can be represented as (with ± representing a shift):

A B C occur together with x y z
A± B C results in x± y z
Therefore A and x are causally connected.

Unlike the preceding four inductive methods, the method of concomitant variation does not involve the elimination of any circumstance: changing the magnitude of one factor results in a change in the magnitude of another factor.
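To make the eliminative logic concrete, here is a minimal Python sketch (an illustration, not part of Mill's text; the data encode the hypothetical countries from the examples above) applying the methods of agreement and difference to instances represented as sets of present circumstances:

def method_of_agreement(positive_instances):
    # Circumstances common to every instance where the effect occurs:
    # candidates for a necessary condition.
    return set.intersection(*positive_instances)

def method_of_difference(positive_instance, negative_instance):
    # Circumstances present only when the effect occurs: candidates for
    # the cause, or an indispensable part of it.
    return positive_instance - negative_instance

# Universal health care example (method of agreement):
country_a = {"former colony", "centre-left government", "federal system"}
country_b = {"centre-left government", "unitary state"}
print(method_of_agreement([country_a, country_b]))
# -> {'centre-left government'}

# Anti-colonial support example (method of difference):
country_c = {"centre-right government", "unitary system", "former colony"}
country_d = {"centre-right government", "unitary system"}
print(method_of_difference(country_c, country_d))
# -> {'former colony'}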
https://en.wikipedia.org/wiki/Mill's_methods
A mill is a device, often a structure , machine or kitchen appliance , that breaks solid materials into smaller pieces by grinding, crushing, or cutting. Such comminution is an important unit operation in many processes . There are many different types of mills and many types of materials processed in them. Historically, mills were powered by hand (e.g., via a hand crank ), by working animal (e.g., horse mill ), by wind ( windmill ) or by water ( watermill ). In the modern era, they are usually powered by electricity .

The grinding of solid materials occurs through mechanical forces that break up the structure by overcoming the interior bonding forces. After grinding, the state of the solid is changed: the grain size, the grain size distribution and the grain shape.

Milling also refers to the process of breaking down, separating, sizing, or classifying aggregate material (e.g. mining ore ), for instance rock crushing or grinding to produce uniform aggregate size for construction purposes, or separation of rock, soil or aggregate material for the purposes of structural fill or land reclamation activities. Aggregate milling processes are also used to remove or separate contamination or moisture from aggregate or soil and to produce "dry fills" prior to transport or structural filling. Grinding may serve several purposes in engineering, such as increasing the surface area of a solid, manufacturing a solid with a desired grain size, or pulping of resources.

In spite of a great number of studies in the field of fracture schemes, there is no known formula which connects the technical grinding work with grinding results. Mining engineers Peter von Rittinger , Friedrich Kick and Fred Chester Bond independently produced equations to relate the needed grinding work to the grain size produced, and a fourth engineer, R. T. Hukki, suggested that these three equations might each describe a narrow range of grain sizes and proposed uniting them along a single curve describing what has come to be known as the Hukki relationship . [ 1 ] [ 2 ] [ 3 ] In stirred mills, the Hukki relationship does not apply and, instead, experimentation has to be performed to determine any relationship. [ 4 ] To evaluate the grinding results, the grain size distribution of the source material (1) and of the ground material (2) is needed. The grinding degree is the ratio of characteristic grain sizes taken from the two distributions; there are several definitions for this characteristic value.

In materials processing a grinder is a machine for producing fine particle size reduction through attrition and compressive forces at the grain size level. See also crusher for mechanisms producing larger particles. In general, grinding processes require a relatively large amount of energy; for this reason, an experimental method to measure the energy used locally during milling with different machines was recently proposed. [ 5 ]

Autogenous or autogenic mills are so-called due to the self-grinding of the ore: a rotating drum throws larger rocks of ore in a cascading motion which causes impact breakage of larger rocks and compressive grinding of finer particles. It is similar in operation to a SAG mill as described below but does not use steel balls in the mill. This is also known as ROM or "Run Of Mine" grinding.

A typical type of fine grinder is the ball mill . A slightly inclined or horizontal rotating cylinder is partially filled with balls , usually stone or metal , which grind material to the necessary fineness by friction and impact with the tumbling balls. Ball mills normally operate with an approximate ball charge of 30%.
Ball mills are characterized by their smaller (comparatively) diameter and longer length, and often have a length 1.5 to 2.5 times the diameter. The feed is at one end of the cylinder and the discharge is at the other. Ball mills are commonly used in the manufacture of Portland cement and in the finer grinding stages of mineral processing. Industrial ball mills can be as large as 8.5 m (28 ft) in diameter with a 22 MW motor, [ 6 ] drawing approximately 0.0011% of the total world's power (see List of countries by electricity consumption ). However, small versions of ball mills can be found in laboratories where they are used for grinding sample material for quality assurance. The power predictions for ball mills typically use the following form of the Bond equation: [ 7 ]

E = 10 W (1/√P80 − 1/√F80)

where E is the specific energy required for grinding (kWh/t), W is the Bond work index of the material (kWh/t), and P80 and F80 are the sizes (in micrometres) which 80% of the product and of the feed, respectively, passes.

Another type of fine grinder commonly used is the French buhrstone mill, which is similar to old-fashioned flour mills .

A high pressure grinding roll, often referred to as an HPGR or roller press, consists of two rollers with the same dimensions, which rotate against each other with the same circumferential speed. The special feeding of bulk material through a hopper leads to a material bed between the two rollers. The bearing units of one roller can move linearly and are pressed against the material bed by springs or hydraulic cylinders. The pressures in the material bed are greater than 50 MPa (7,000 psi); in general they reach 100 to 300 MPa. In this way the material bed is compacted to a solid volume portion of more than 80%. The roller press has a certain similarity to roller crushers and roller presses for the compacting of powders, but purpose, construction and operation mode are different. Extreme pressure causes the particles inside the compacted material bed to fracture into finer particles and also causes microfracturing at the grain size level. Compared to ball mills, HPGRs achieve a 30 to 50% lower specific energy consumption, although they are not as common as ball mills since they are a newer technology.

A similar type of intermediate crusher is the edge runner, which consists of a circular pan with two or more heavy wheels known as mullers rotating within it; material to be crushed is shoved underneath the wheels using attached plow blades.

In a pebble mill, a rotating drum causes friction and attrition between rock pebbles and ore particles. Pebble mills may be used where product contamination by iron from steel balls must be avoided. Quartz or silica is commonly used because it is inexpensive to obtain.

In a rod mill, a rotating drum causes friction and attrition between steel rods and ore particles. [ citation needed ] The term 'rod mill' is also used as a synonym for a slitting mill , which makes rods of iron or other metal. Rod mills are less common than ball mills for grinding minerals. The rods used in the mill, usually of high-carbon steel, can vary in both length and diameter. The smaller the rods, the larger the total surface area and hence the greater the grinding efficiency. [ 8 ]

SAG is an acronym for semi-autogenous grinding. SAG mills are autogenous mills that also use grinding balls like a ball mill. A SAG mill is usually a primary or first stage grinder. SAG mills use a ball charge of 8 to 21%. [ 9 ] [ 10 ] The largest SAG mill is 12.8 m (42 ft) in diameter, powered by a 28 MW (38,000 hp) motor. [ 11 ] A SAG mill with a 13.4 m (44 ft) diameter and a power of 35 MW (47,000 hp) has been designed. [ 12 ] Attrition between grinding balls and ore particles causes grinding of finer particles.
SAG mills are characterized by their large diameter and short length as compared to ball mills. The inside of the mill is lined with lifting plates to lift the material inside the mill, where it then falls off the plates onto the rest of the ore charge. SAG mills are primarily used at gold, copper and platinum mines, with applications also in the lead, zinc, silver, alumina and nickel industries.

Tower mills, often called vertical mills, stirred mills or regrind mills, are a more efficient means of grinding material at smaller particle sizes, and can be used after ball mills in a grinding process. As in ball mills, grinding (steel) balls or pebbles are often added to stirred mills to help grind ore; however, these mills contain a large screw mounted vertically to lift and grind material. In tower mills there is no cascading action as in standard grinding mills. Stirred mills are also common for mixing quicklime (CaO) into a lime slurry. There are several advantages to the tower mill: low noise, efficient energy usage, and low operating costs.

A VSI mill throws rock or ore particles against a wear plate by slinging them from a spinning center that rotates on a vertical shaft. This type of mill uses the same principle as a VSI crusher .
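As a concrete illustration of the Bond equation given above, the following minimal Python sketch (illustrative only; the function name and the example work index, sizes and throughput are assumptions, not values from the text) estimates the specific grinding energy and the implied mill power draw:

import math

def bond_specific_energy(work_index_kwh_t, f80_um, p80_um):
    # Bond equation: E = 10 W (1/sqrt(P80) - 1/sqrt(F80)), in kWh/t.
    return 10.0 * work_index_kwh_t * (1.0 / math.sqrt(p80_um)
                                      - 1.0 / math.sqrt(f80_um))

# Assumed example: W = 14 kWh/t ore, ground from F80 = 2000 um to P80 = 75 um.
e = bond_specific_energy(14.0, 2000.0, 75.0)   # ~13 kWh/t
throughput_tph = 500.0                         # assumed tonnes per hour
power_mw = e * throughput_tph / 1000.0         # (kWh/t) x (t/h) = kW -> MW
print(f"specific energy: {e:.1f} kWh/t, mill power: {power_mw:.1f} MW")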
https://en.wikipedia.org/wiki/Mill_(grinding)
A mill pond (or millpond ) is a body of water used as a reservoir for a water-powered mill . [ 1 ] [ 2 ] Mill ponds were often created through the construction of a mill dam or weir (and mill stream) across a waterway . In many places, the common proper name Mill Pond has remained even though the mill has long since gone. It may be fed by a man-made stream, [ 3 ] known by several terms including leat and mill stream. The channel or stream leading from the mill pond is the mill race , which together with weirs, dams, channels and the terrain establishing the mill pond, delivers water to the mill wheel to convert potential and/or kinetic energy of the water to mechanical energy by rotating the mill wheel. The production of mechanical power is the purpose of this civil engineering hydraulic system. The term mill pond is often used colloquially and in literature to refer to a very flat body of water. [ 2 ] Witnesses of the loss of RMS Titanic reported that the sea was "like a mill pond". [ 2 ] [ 4 ]
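The energy conversion described above can be quantified with the standard hydraulic power relation (textbook physics, not a formula from the article): with water density ρ, gravitational acceleration g, volumetric flow rate Q, head h, and overall wheel efficiency η, the mechanical power delivered by the wheel is roughly

P = η ρ g Q h

For an assumed head of 3 m, a flow of 0.5 m³/s and an efficiency of 0.6, this gives P ≈ 0.6 × 1000 × 9.81 × 0.5 × 3 ≈ 8.8 kW, which is the scale of power a traditional watermill extracts from its mill pond.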
https://en.wikipedia.org/wiki/Mill_pond
Mill scale , often shortened to just scale , is the flaky surface of hot rolled steel , consisting of the mixed iron oxides iron(II) oxide ( FeO , wüstite ), iron(III) oxide ( Fe 2 O 3 , hematite ), and iron(II,III) oxide ( Fe 3 O 4 , magnetite ). Mill scale is formed on the outer surfaces of plates, sheets or profiles when they are produced by passing red hot iron or steel billets through rolling mills. [ 1 ] Mill scale is bluish-black in color. It is usually less than 0.1 mm (0.0039 in) thick, and initially adheres to the steel surface and protects it from atmospheric corrosion, provided no break occurs in this coating. Because it is electrochemically cathodic to steel, any break in the mill scale coating will cause accelerated corrosion of the steel exposed at the break. Mill scale is thus a boon for a while, until its coating breaks due to handling of the steel product or any other mechanical cause.

Mill scale becomes a nuisance when the steel is to be processed. Any paint applied over it is wasted, since it will come off with the scale as moisture-laden air gets under it. Mill scale can be removed from steel surfaces by flame cleaning , pickling , or abrasive blasting , which are all tedious operations that consume energy. This is why shipbuilders and steel fixers used to leave steel and rebar delivered freshly rolled from mills out in the open to 'weather' until most of the scale fell off due to atmospheric action. Nowadays, most steel mills can supply their product with mill scale removed and the steel coated with shop primers over which welding or painting can be done safely. Mill scale generated in rolling mills is collected and sent to a sinter plant for recycling .

Mill scale is sought after by select abstract expressionist artists [ like whom? ] as its effect on steel can cause unpredictable and seemingly random abstract organic visual effects. [ citation needed ] Although the majority of mill scale is removed from steel during its passage through scale breaker rolls during manufacturing, a smaller, structurally inconsequential residue can remain visible. Accelerating the corrosive effects of this processing vestige with phosphoric acid, alone or in conjunction with selenium dioxide, can create a high-contrast visual substrate onto which other compositional elements can be added.

Mill scale can be used as a raw material in granular refractory . When this refractory is cast and preheated, the scale particles provide escape routes for the evaporating water vapor, thus preventing cracks and resulting in a strong, monolithic structure.

Mill scale is a complex oxide that contains around 70% iron with traces of nonferrous metals and alkaline compounds. Reduced iron powder may be obtained by converting the mill scale into a single oxide, hematite ( Fe 2 O 3 ), followed by reduction with hydrogen. Shahid and Choi reported a reverse co-precipitation method for the synthesis of magnetite ( Fe 3 O 4 ) from mill scale and used it for multiple environmental applications such as nutrient recovery, [ 2 ] ballasted coagulation in the activated sludge process, and heavy metal remediation in aqueous environments. [ 3 ]
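The hematite-then-hydrogen route described above can be summarized with standard stoichiometry (textbook chemistry; these equations are not given in the article):

4 FeO + O₂ → 2 Fe₂O₃  (oxidation of wüstite to hematite)
4 Fe₃O₄ + O₂ → 6 Fe₂O₃  (oxidation of magnetite to hematite)
Fe₂O₃ + 3 H₂ → 2 Fe + 3 H₂O  (hydrogen reduction to iron powder)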
https://en.wikipedia.org/wiki/Mill_scale
A mill test report ( MTR ), often also called a certified mill test report, certified material test report, mill test certificate (MTC), inspection certificate, certificate of test, or a host of other names, is a quality assurance document used in the metals industry that certifies a material's chemical and physical properties and states that a product made of metal (steel, aluminum, brass or other alloys ) complies with the specific standards of an international standards organization (such as ANSI, ASME, etc.). Mill here refers to an industry which manufactures and processes raw materials. An MTC provides traceability and assurance to the end user about the quality of the steel used and the process used to produce it.

Typically a European MTC will be produced to EN 10204. [ 1 ] High quality steels for pressure vessel or structural purposes will be declared to type 2.1 or 2.2, or certified to type 3.1 or 3.2. The MTC will specify the type of certificate, the grade of steel and any addenda. It will also specify the results of chemical and physical examination to allow the purchaser or end user to compare the plate to the requirements of the relevant standards.

In the steel industry there are mainly two types of MTC; for products such as steel plates or steel pipes, a specific inspection scope or list must be given.
https://en.wikipedia.org/wiki/Mill_test_report_(metals_industry)
Miller and Lents, Ltd. is a petroleum consulting company based in Houston, Texas. The firm provides services including reserves certifications, audits, and independent evaluations. [ 1 ] They prepare evaluations according to the standards of the United States Securities and Exchange Commission (SEC) Regulation S-X and the Petroleum Resources Management System (PRMS) published by the Society of Petroleum Engineers (SPE). The Chairman of the Board is Robert Oberst. [ 2 ] The Senior Vice Presidents are Leslie Fallon and Gary Knapp. [ citation needed ]

Miller and Lents, Ltd. prepares reserves estimates by applying both SEC and SPE-PRMS standards. These estimates include the assessment of developed and undeveloped reserves and classification according to Proved, Probable, Possible, Contingent, and Prospective Resources definitions. They also evaluate relevant economic parameters and prepare financial reports for the United States Securities and Exchange Commission (SEC), the London Stock Exchange (LSE), and the Alternative Investment Market (AIM); cash flow projections; forecasts of future prices; and estimates of Fair Market Value . They perform geologic studies including seismic studies, structural studies, stratigraphic studies, subsurface mapping, field development studies, and reservoir characterization. In addition, they perform petrophysical analyses such as log analysis and core analysis studies. Miller and Lents, Ltd. provides services to domestic and international clients, with a significant portion of their business coming from clients operating in Russia. In addition to evaluations for clients operating in Russia, [ 3 ] Miller and Lents, Ltd. has performed evaluations for clients in the United States, [ 4 ] Azerbaijan, [ 5 ] Israel, [ 6 ] Kazakhstan, [ 7 ] the United Kingdom, [ 8 ] Australia, [ 9 ] and Lithuania, [ 10 ] among others.

In 1948, J. R. Butler and Martin Miller formed an oil and gas consulting partnership known as J. R. Butler and Company. Max Lents, who was not a partner at the beginning of J. R. Butler and Company, was considered an original founding partner when he joined the firm a year later. The company name then changed to Butler, Miller and Lents. In 1970 its name was changed to Butler, Miller and Lents, Ltd., at which time it became a Subchapter S Corporation . In 1976, the name of the firm was changed to its current name, Miller and Lents, Ltd., after J. R. Butler exchanged his interest in Butler, Miller and Lents, Ltd.

In addition to founding Miller and Lents, Ltd., Max Lents and Martin Miller made significant contributions to the field of petroleum engineering. They introduced the Miller–Lents permeability distribution, which aids in describing the permeability of heterogeneous reservoirs and provides a "better match with actual field performance when applied to cycling operations in gas condensate reservoirs." [ 11 ]

Stieber was a petroleum engineer with Miller and Lents, Ltd. from 1974 to 2004. He made significant contributions to the field of petroleum engineering and the practice of oil and gas well log analysis. In his paper "The Distribution of Shale in Sandstones and its Effect upon Porosity ," co-authored with E. C. Thomas in 1975, he introduced the Thomas–Stieber diagram, which is still commonly used for log analysis today. [ 12 ]
https://en.wikipedia.org/wiki/Miller_and_Lents
In electronics , the Miller effect (named after its discoverer John Milton Miller ) accounts for the increase in the equivalent input capacitance of an inverting voltage amplifier due to amplification of the effect of capacitance between the amplifier's input and output terminals, and is given by

C_M = C (1 + A_v)

where −A_v is the voltage gain of the inverting amplifier (A_v positive) and C is the feedback capacitance.

Although the term Miller effect normally refers to capacitance, any impedance connected between the input and another node exhibiting gain can modify the amplifier input impedance via this effect. These properties of the Miller effect are generalized in the Miller theorem . The Miller capacitance due to undesired parasitic capacitance between the output and input of active devices like transistors and vacuum tubes is a major factor limiting their gain at high frequencies. When Miller published his work in 1919, [ 1 ] he was working on vacuum tube triodes . The same analysis applies to modern devices such as bipolar junction and field-effect transistors .

Consider a circuit of an ideal inverting voltage amplifier of gain −A_v with an impedance Z connected between its input and output nodes. The output voltage is therefore V_o = −A_v V_i . Assuming that the amplifier input draws no current, all of the input current flows through Z, and is therefore given by

I_i = (V_i − V_o) / Z = V_i (1 + A_v) / Z .

The input impedance of the circuit is

Z_in = V_i / I_i = Z / (1 + A_v) .

In the Laplace domain (where s represents complex frequency), if Z consists of just a capacitor forming a complex impedance Z = 1/(sC), then the circuit's resulting input impedance will be equivalent to that of a larger capacitance C_M:

Z_in = 1 / (s C_M) , with C_M = C (1 + A_v) .

This Miller capacitance C_M is the physical capacitance C multiplied by the factor (1 + A_v). [ 2 ]

As most amplifiers are inverting (A_v as defined above is positive), the effective capacitance at their inputs is increased due to the Miller effect. This can reduce the bandwidth of the amplifier, restricting its range of operation to lower frequencies. The tiny junction and stray capacitances between the base and collector terminals of a Darlington transistor , for example, may be drastically increased by the Miller effect due to its high gain, lowering the high frequency response of the device.

It is also important to note that the Miller capacitance is the capacitance seen looking into the input. If looking for all of the RC time constants (poles), it is important to include as well the capacitance seen by the output. The capacitance on the output is often neglected since it sees C (1 + 1/A_v) and amplifier outputs are typically low impedance. However, if the amplifier has a high impedance output, such as when a gain stage is also the output stage, then this RC can have a significant impact on the performance of the amplifier. This is when pole splitting techniques are used.

The Miller effect may also be exploited to synthesize larger capacitors from smaller ones. One such example is in the stabilization of feedback amplifiers , where the required capacitance may be too large to practically include in the circuit.
This may be particularly important in the design of integrated circuits , where capacitors can consume significant area, increasing costs.

The Miller effect may be undesired in many cases, and approaches may be sought to lower its impact. Several such techniques are used in the design of amplifiers. A current buffer stage may be added at the output to lower the gain A_v between the input and output terminals of the amplifier (though not necessarily the overall gain). For example, a common base may be used as a current buffer at the output of a common emitter stage, forming a cascode . This will typically reduce the Miller effect and increase the bandwidth of the amplifier. Alternatively, a voltage buffer may be used before the amplifier input, reducing the effective source impedance seen by the input terminals. This lowers the RC time constant of the circuit and typically increases the bandwidth.

The Miller capacitance can be mitigated by employing neutralisation . This can be achieved by feeding back an additional signal that is in phase opposition to that which is present at the stage output. By feeding back such a signal via a suitable capacitor, the Miller effect can, at least in theory, be eliminated entirely. In practice, variations in the capacitance of individual amplifying devices, coupled with other stray capacitances, make it difficult to design a circuit such that total cancellation occurs. Historically, it was not unknown for the neutralising capacitor to be selected on test to match the amplifying device, particularly with early transistors that had very poor bandwidths. The derivation of the phase-inverted signal usually requires an inductive component such as a choke or an inter-stage transformer.

In vacuum tubes , an extra grid (the screen grid) could be inserted between the control grid and the anode. This had the effect of screening the anode from the grid and substantially reducing the capacitance between them. While the technique was initially successful, other factors limited its advantage as the bandwidth of tubes improved. Later tubes had to employ very small grids (the frame grid) to reduce the capacitance to allow the device to operate at frequencies that were impossible with the screen grid.

Figure 2A shows an example of Figure 1 in which the impedance coupling the input to the output is the coupling capacitor C_C. A Thévenin voltage source V_A drives the circuit with Thévenin resistance R_A. The output impedance of the amplifier is considered low enough that the relationship V_o = −A_v V_i is presumed to hold. At the output, Z_L serves as the load. (The load is irrelevant to this discussion: it just provides a path for the current to leave the circuit.) In Figure 2A, the coupling capacitor delivers a current jωC_C (V_i − V_o) to the output node.

Figure 2B shows a circuit electrically identical to Figure 2A using Miller's theorem. The coupling capacitor is replaced on the input side of the circuit by the Miller capacitance C_M, which draws the same current from the driver as the coupling capacitor in Figure 2A. Therefore, the driver sees exactly the same loading in both circuits.
On the output side, the same current as drawn from the coupling capacitor in Figure 2A is instead drawn from a capacitor C_Mo equal to:

C_Mo = (1 + 1/A_v) C_C .

In order for the Miller capacitance to draw the same current in Figure 2B as the coupling capacitor in Figure 2A, the Miller transformation is used to relate C_M to C_C. In this example, this transformation is equivalent to setting the currents equal, that is

jωC_C (V_i − V_o) = jωC_M V_i ,

or, rearranging this equation,

C_M = C_C (1 + A_v) .

This result is the same as the C_M of the Derivation Section.

The present example, with A_v frequency independent, shows the implications of the Miller effect, and therefore of C_C, upon the frequency response of this circuit, and is typical of the impact of the Miller effect (see, for example, common source ). If C_C is 0, the output voltage of the circuit is simply −A_v v_A , independent of frequency. However, when C_C is not zero, Figure 2B shows the large Miller capacitance appears at the input of the circuit. The voltage output of the circuit now becomes

v_o = −A_v v_i = −A_v v_A / (1 + jω C_M R_A) ,

and rolls off with frequency once the frequency is high enough that ωC_M R_A ≥ 1. It is a low-pass filter . In analog amplifiers this curtailment of frequency response is a major implication of the Miller effect. In this example, the frequency ω_3dB such that ω_3dB C_M R_A = 1 marks the end of the low-frequency response region and sets the bandwidth or cutoff frequency of the amplifier.

The effect of C_M upon the amplifier bandwidth is greatly reduced for low impedance drivers (C_M R_A is small if R_A is small). Consequently, one way to minimize the Miller effect upon bandwidth is to use a low-impedance driver, for example, by interposing a voltage follower stage between the driver and the amplifier, which reduces the apparent driver impedance seen by the amplifier.

The output voltage of this simple circuit is always −A_v v_i . However, real amplifiers have output resistance. If the amplifier output resistance is included in the analysis, the output voltage exhibits a more complex frequency response and the impact of the frequency-dependent current source on the output side must be taken into account. [ 3 ] Ordinarily these effects show up only at frequencies much higher than the roll-off due to the Miller capacitance, so the analysis presented here is adequate to determine the useful frequency range of an amplifier dominated by the Miller effect.

This example also assumes A_v is frequency independent, but more generally there is frequency dependence of the amplifier contained implicitly in A_v. Such frequency dependence of A_v also makes the Miller capacitance frequency dependent, so interpretation of C_M as a capacitance becomes more difficult. However, ordinarily any frequency dependence of A_v arises only at frequencies much higher than the roll-off with frequency caused by the Miller effect, so for frequencies up to the Miller-effect roll-off of the gain, A_v is accurately approximated by its low-frequency value. Determination of C_M using A_v at low frequencies is the so-called Miller approximation . [ 2 ] With the Miller approximation, C_M becomes frequency independent, and its interpretation as a capacitance at low frequencies is secure.
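To see the bandwidth implication numerically, here is a minimal Python sketch (the component values are hypothetical, chosen only for illustration):

import math

def miller_capacitance(c_feedback, gain):
    # Input-referred Miller capacitance: C_M = C (1 + A_v).
    return c_feedback * (1.0 + gain)

def cutoff_hz(c_m, r_source):
    # -3 dB frequency where w C_M R_A = 1, i.e. f = 1 / (2 pi R_A C_M).
    return 1.0 / (2.0 * math.pi * r_source * c_m)

# Assumed inverting stage: 2 pF feedback capacitance, A_v = 100,
# driven from a 10 kOhm Thevenin source resistance.
c_m = miller_capacitance(2e-12, 100.0)
print(f"Miller capacitance: {c_m * 1e12:.0f} pF")          # 202 pF
print(f"bandwidth: {cutoff_hz(c_m, 10e3) / 1e3:.0f} kHz")  # ~79 kHz

A bare 2 pF at this input would allow a cutoff near 8 MHz; the gain of 100 drags it down by roughly the factor (1 + A_v), which is exactly the effect the cascode and buffering techniques above are designed to avoid.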
https://en.wikipedia.org/wiki/Miller_effect
Miller indices form a notation system in crystallography for lattice planes in crystal (Bravais) lattices . In particular, a family of lattice planes of a given (direct) Bravais lattice is determined by three integers h , k , and ℓ , the Miller indices . They are written ( hkℓ ), and denote the family of (parallel) lattice planes (of the given Bravais lattice) orthogonal to g_hkℓ = h b1 + k b2 + ℓ b3 , where the b_i are the basis or primitive translation vectors of the reciprocal lattice for the given Bravais lattice. (Note that the plane is not always orthogonal to the linear combination of direct or original lattice vectors h a1 + k a2 + ℓ a3 , because the direct lattice vectors need not be mutually orthogonal.) This is based on the fact that a reciprocal lattice vector g (the vector indicating a reciprocal lattice point from the reciprocal lattice origin) is the wavevector of a plane wave in the Fourier series of a spatial function (e.g., electronic density function) whose periodicity follows the original Bravais lattice, so wavefronts of the plane wave are coincident with parallel lattice planes of the original lattice. Since a measured scattering vector in X-ray crystallography , Δk = k_out − k_in (with k_out the outgoing X-ray wavevector, scattered from the crystal lattice, and k_in the incoming X-ray wavevector), is equal to a reciprocal lattice vector g as stated by the Laue equations , the measured scattered X-ray peak at each measured scattering vector Δk is marked by Miller indices .

By convention, negative integers are written with a bar, as in 3̄ for −3. The integers are usually written in lowest terms, i.e. their greatest common divisor should be 1. Miller indices are also used to designate reflections in X-ray crystallography . In this case the integers are not necessarily in lowest terms, and can be thought of as corresponding to planes spaced such that the reflections from adjacent planes would have a phase difference of exactly one wavelength (2π), regardless of whether there are atoms on all these planes or not.

There are also several related notations: [ 1 ] the notation { hkℓ } denotes the set of all planes that are equivalent to ( hkℓ ) by the symmetry of the lattice. In the context of crystal directions (not planes), the corresponding notations are [ hkℓ ] for a direction in the basis of the direct lattice vectors, and ⟨ hkℓ ⟩ for the set of all directions that are equivalent to [ hkℓ ] by symmetry. Note that for Laue–Bragg interferences the indices are conventionally written without brackets.

Miller indices were introduced in 1839 by the British mineralogist William Hallowes Miller , although an almost identical system ( Weiss parameters ) had already been used by German mineralogist Christian Samuel Weiss since 1817. [ 2 ] The method was also historically known as the Millerian system, and the indices as Millerian, [ 3 ] although this is now rare.

The Miller indices are defined with respect to any choice of unit cell and not only with respect to primitive basis vectors, as is sometimes stated. There are two equivalent ways to define the meaning of the Miller indices: [ 1 ] via a point in the reciprocal lattice , or as the inverse intercepts along the lattice vectors. Both definitions are given below.
In either case, one needs to choose the three lattice vectors a1 , a2 , and a3 that define the unit cell (note that the conventional unit cell may be larger than the primitive cell of the Bravais lattice , as the examples below illustrate). Given these, the three primitive reciprocal lattice vectors are also determined (denoted b1 , b2 , and b3 ).

Then, given the three Miller indices h, k, ℓ, ( hkℓ ) denotes planes orthogonal to the reciprocal lattice vector

g_hkℓ = h b1 + k b2 + ℓ b3 .

That is, ( hkℓ ) simply indicates a normal to the planes in the basis of the primitive reciprocal lattice vectors. Because the coordinates are integers, this normal is itself always a reciprocal lattice vector. The requirement of lowest terms means that it is the shortest reciprocal lattice vector in the given direction.

Equivalently, ( hkℓ ) denotes a plane that intercepts the three points a1 / h , a2 / k , and a3 / ℓ , or some multiple thereof. That is, the Miller indices are proportional to the inverses of the intercepts of the plane, in the basis of the lattice vectors. If one of the indices is zero, it means that the planes do not intersect that axis (the intercept is "at infinity").

Considering only ( hkℓ ) planes intersecting one or more lattice points (the lattice planes ), the perpendicular distance d between adjacent lattice planes is related to the (shortest) reciprocal lattice vector orthogonal to the planes by the formula d = 2π / |g_hkℓ| . [ 1 ]

The related notation [hkℓ] denotes the direction h a1 + k a2 + ℓ a3 . That is, it uses the direct lattice basis instead of the reciprocal lattice. Note that [hkℓ] is not generally normal to the ( hkℓ ) planes, except in a cubic lattice as described below.

For the special case of simple cubic crystals, the lattice vectors are orthogonal and of equal length (usually denoted a ), as are those of the reciprocal lattice. Thus, in this common case, the Miller indices ( hkℓ ) and [ hkℓ ] both simply denote normals/directions in Cartesian coordinates . For cubic crystals with lattice constant a , the spacing d between adjacent ( hkℓ ) lattice planes is (from above)

d_hkℓ = a / √(h² + k² + ℓ²) .

Because of the symmetry of cubic crystals, it is possible to change the place and sign of the integers and have equivalent directions and planes. For face-centered cubic and body-centered cubic lattices, the primitive lattice vectors are not orthogonal. However, in these cases the Miller indices are conventionally defined relative to the lattice vectors of the cubic supercell and hence are again simply the Cartesian directions.

With hexagonal and rhombohedral lattice systems , it is possible to use the Bravais–Miller system, which uses four indices ( h k i ℓ ) that obey the constraint

h + k + i = 0 .

Here h , k and ℓ are identical to the corresponding Miller indices, and i is a redundant index. This four-index scheme for labeling planes in a hexagonal lattice makes permutation symmetries apparent. For example, the similarity between (110) ≡ (112̄0) and (12̄0) ≡ (12̄10) is more obvious when the redundant index is shown.

In the figure at right, the (001) plane has a 3-fold symmetry: it remains unchanged by a rotation of 1/3 (2π/3 rad, 120°). The [100], [010] and the [1̄1̄0] directions are really similar. If S is the intercept of the plane with the [1̄1̄0] axis, then i = 1/S. There are also ad hoc schemes (e.g.
in the transmission electron microscopy literature) for indexing hexagonal lattice vectors (rather than reciprocal lattice vectors or planes) with four indices. However, they do not operate by similarly adding a redundant index to the regular three-index set.

For example, the reciprocal lattice vector ( hkℓ ) as suggested above can be written in terms of reciprocal lattice vectors as h b1 + k b2 + ℓ b3 . For hexagonal crystals this may be expressed in terms of the direct-lattice basis vectors a1 , a2 and a3 as

g_hkℓ = (2/(3a²)) (2h + k) a1 + (2/(3a²)) (h + 2k) a2 + (1/c²) ℓ a3 .

Hence zone indices of the direction perpendicular to plane ( hkℓ ) are, in suitably normalized triplet form, simply [2h+k, h+2k, ℓ(3/2)(a/c)²] . When four indices are used for the zone normal to plane ( hkℓ ), however, the literature often uses [h, k, −h−k, ℓ(3/2)(a/c)²] instead. [ 4 ] Thus, as you can see, four-index zone indices in square or angle brackets sometimes mix a single direct-lattice index on the right with reciprocal-lattice indices (normally in round or curly brackets) on the left. Note that hexagonal interplanar distances take the form

d_hkℓ = a / √( (4/3)(h² + hk + k²) + (a²/c²) ℓ² ) .

Crystallographic directions are lines linking nodes ( atoms , ions or molecules ) of a crystal. Similarly, crystallographic planes are planes linking nodes. Some directions and planes have a higher density of nodes; these dense planes have an influence on the behavior of the crystal, for example on cleavage, on plastic deformation by dislocation glide, and on adsorption and reactivity at surfaces. For all these reasons, it is important to determine the planes and thus to have a notation system.

Ordinarily, Miller indices are always integers by definition, and this constraint is physically significant. To understand this, suppose that we allow a plane ( abc ) where the Miller "indices" a , b and c (defined as above) are not necessarily integers. If a , b and c have rational ratios, then the same family of planes can be written in terms of integer indices ( hkℓ ) by scaling a , b and c appropriately: divide by the largest of the three numbers, and then multiply by the least common denominator . Thus, integer Miller indices implicitly include indices with all rational ratios. The reason why planes where the components (in the reciprocal-lattice basis) have rational ratios are of special interest is that these are the lattice planes : they are the only planes whose intersections with the crystal are 2d-periodic.

For a plane ( abc ) where a , b and c have irrational ratios, on the other hand, the intersection of the plane with the crystal is not periodic. It forms an aperiodic pattern known as a quasicrystal . This construction corresponds precisely to the standard "cut-and-project" method of defining a quasicrystal, using a plane with irrational-ratio Miller indices. (Although many quasicrystals, such as the Penrose tiling , are formed by "cuts" of periodic lattices in more than three dimensions, involving the intersection of more than one such hyperplane .)
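The two definitions above translate directly into a short Python sketch (illustrative; the helper names and the example lattice constant are assumptions, not from the article) converting plane intercepts to Miller indices and evaluating the cubic interplanar spacing:

import math
from fractions import Fraction

def miller_from_intercepts(p, q, r):
    # Miller indices from intercepts p, q, r along a1, a2, a3
    # (use math.inf for a plane parallel to an axis): take reciprocals,
    # then clear denominators and common factors.
    recips = [Fraction(0) if x == math.inf else 1 / Fraction(x)
              for x in (p, q, r)]
    denom = math.lcm(*(f.denominator for f in recips))
    ints = [int(f * denom) for f in recips]
    g = math.gcd(*ints)
    return tuple(i // (g or 1) for i in ints)

def d_spacing_cubic(a, h, k, l):
    # Cubic lattice: d = a / sqrt(h^2 + k^2 + l^2).
    return a / math.sqrt(h * h + k * k + l * l)

print(miller_from_intercepts(1, 2, math.inf))        # -> (2, 1, 0)
print(f"{d_spacing_cubic(0.3524, 1, 1, 1):.4f} nm")  # (111) of a ~0.3524 nm cell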
https://en.wikipedia.org/wiki/Miller_index
The Miller process is an industrial-scale chemical procedure used to refine gold to a high degree of purity (99.5%). It was patented by Francis Bowyer Miller in 1867. The process involves blowing chlorine gas through molten, but (slightly) impure, gold. Nearly all metal contaminants react to form chlorides, but gold does not at these high temperatures; the other metals volatilize or form a low-density slag on top of the molten gold. [ 1 ] [ 2 ] [ 3 ] When all impurities have been removed from the gold (observable by a change in flame color), the gold is removed and processed in the manner required for sale or use.

The resulting gold is 99.5% pure, but of lower purity than gold produced by the other common refining method, the Wohlwill process , which produces gold of up to 99.999% purity. [ 1 ] [ 2 ] The Wohlwill process is commonly used for producing high-purity gold, such as in electronics work, where exacting standards of purity are required. When the highest purity gold is not required, refiners use the Miller process due to its relative ease and quicker turnaround times, and because it does not tie up the large amount of gold in the form of chloroauric acid which the Wohlwill process permanently requires for the electrolyte . [ 1 ] [ 2 ]
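Representative reactions behind the process (textbook chemistry offered for illustration; the article itself gives no equations) show why chlorination separates the impurities: base metals and silver form chlorides, which volatilize or join the low-density slag, while gold forms no stable chloride at the temperature of the melt:

Zn + Cl₂ → ZnCl₂  (volatilizes)
2 Cu + Cl₂ → 2 CuCl  (enters the slag)
2 Ag + Cl₂ → 2 AgCl  (enters the slag)

Gold chlorides decompose well below the melting point of gold (1064 °C), so the molten gold passes through unreacted.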
https://en.wikipedia.org/wiki/Miller_process
The Miller–Urey experiment , [ 1 ] or Miller experiment , [ 2 ] was an experiment in chemical synthesis carried out in 1952 that simulated the conditions thought at the time to be present in the atmosphere of the early, prebiotic Earth . It is seen as one of the first successful experiments demonstrating the synthesis of organic compounds from inorganic constituents in an origin of life scenario. The experiment used methane (CH 4 ), ammonia (NH 3 ) and hydrogen (H 2 ) in a 2:2:1 ratio, together with water (H 2 O). Applying an electric arc (simulating lightning) resulted in the production of amino acids .

It is regarded as a groundbreaking experiment, and the classic experiment investigating the origin of life ( abiogenesis ). It was performed in 1952 by Stanley Miller , supervised by Nobel laureate Harold Urey at the University of Chicago , and published the following year. At the time, it supported Alexander Oparin 's and J. B. S. Haldane 's hypothesis that the conditions on the primitive Earth favored chemical reactions that synthesized complex organic compounds from simpler inorganic precursors. [ 3 ] [ 4 ] [ 5 ] After Miller's death in 2007, scientists examining sealed vials preserved from the original experiments were able to show that more amino acids were produced in the original experiment than Miller was able to report with paper chromatography . [ 6 ] While evidence suggests that Earth's prebiotic atmosphere might have typically had a composition different from the gas used in the Miller experiment, prebiotic experiments continue to produce racemic mixtures of simple-to-complex organic compounds, including amino acids, under varying conditions. [ 7 ] Moreover, researchers have shown that transient, hydrogen-rich atmospheres – conducive to Miller–Urey synthesis – would have occurred after large asteroid impacts on early Earth. [ 8 ] [ 9 ]

Until the 19th century, there was considerable acceptance of the theory of spontaneous generation , the idea that "lower" animals, such as insects or rodents, arose from decaying matter. [ 10 ] However, several experiments in the 19th century – particularly Louis Pasteur 's swan neck flask experiment in 1859 [ 11 ] – disproved the theory that life arose from decaying matter. Charles Darwin published On the Origin of Species that same year, describing the mechanism of biological evolution . [ 12 ] While Darwin never publicly wrote about the first organism in his theory of evolution, in a letter to Joseph Dalton Hooker he speculated: "But if (and oh what a big if) we could conceive in some warm little pond with all sorts of ammonia and phosphoric salts, light, heat, electricity etcetera present, that a protein compound was chemically formed, ready to undergo still more complex changes [...]" [ 13 ]

At this point, it was known that organic molecules could be formed from inorganic starting materials, as Friedrich Wöhler had described the synthesis of urea from ammonium cyanate in 1828 ( Wöhler synthesis ). [ 14 ] Several other early seminal works in the field of organic synthesis followed, including Alexander Butlerov 's synthesis of sugars from formaldehyde and Adolph Strecker 's synthesis of the amino acid alanine from acetaldehyde , ammonia , and hydrogen cyanide .
[ 15 ] In 1913, Walther Löb synthesized amino acids by exposing formamide to silent electric discharge , [ 16 ] so scientists were beginning to produce the building blocks of life from simpler molecules, but these experiments were not intended to simulate any prebiotic scheme or even considered relevant to origin of life questions. [ 15 ]

But the scientific literature of the early 20th century contained speculations on the origin of life. [ 15 ] [ 17 ] In 1903, physicist Svante Arrhenius hypothesized that the first microscopic forms of life, driven by the radiation pressure of stars, could have arrived on Earth from space in the panspermia hypothesis. [ 18 ] In the 1920s, Leonard Troland wrote about a primordial enzyme that could have formed by chance in the primitive ocean and catalyzed reactions, and Hermann J. Muller suggested that the formation of a gene with catalytic and autoreplicative properties could have set evolution in motion. [ 19 ] Around the same time, Alexander Oparin's and J. B. S. Haldane's " primordial soup " ideas were emerging, which hypothesized that a chemically reducing atmosphere on early Earth would have been conducive to organic synthesis in the presence of sunlight or lightning, gradually concentrating the ocean with random organic molecules until life emerged. [ 20 ] In this way, frameworks for the origin of life were coming together, but at the mid-20th century these hypotheses lacked direct experimental evidence.

At the time of the Miller–Urey experiment, Harold Urey was a Professor of Chemistry at the University of Chicago who had a renowned career, including receiving the Nobel Prize in Chemistry in 1934 for his isolation of deuterium [ 21 ] and leading efforts to use gaseous diffusion for uranium isotope enrichment in support of the Manhattan Project . [ 22 ] In 1952, Urey postulated that the high temperatures and energies associated with large impacts in Earth's early history would have provided an atmosphere of methane (CH 4 ), water (H 2 O), ammonia (NH 3 ), and hydrogen (H 2 ), creating the reducing environment necessary for the Oparin–Haldane "primordial soup" scenario. [ 23 ]

Stanley Miller arrived at the University of Chicago in 1951 to pursue a PhD under nuclear physicist Edward Teller , another prominent figure in the Manhattan Project. [ 24 ] Miller began to work on how different chemical elements were formed in the early universe but, after a year of minimal progress, Teller was to leave for California to establish Lawrence Livermore National Laboratory and further nuclear weapons research. [ 24 ] Miller, having seen Urey lecture on his 1952 paper, approached him about the possibility of a prebiotic synthesis experiment. While Urey initially discouraged Miller, he agreed to allow Miller to try for a year. [ 24 ] By February 1953, Miller had mailed a manuscript, as sole author, reporting the results of his experiment to Science . [ 25 ] Urey refused to be listed on the manuscript because he believed his status would cause others to underappreciate Miller's role in designing and conducting the experiment, and so he encouraged Miller to take full credit for the work. Despite this, the set-up is still most commonly referred to by both their names. [ 25 ] [ 26 ] After not hearing from Science for a few weeks, a furious Urey wrote to the editorial board demanding an answer, stating, "If Science does not wish to publish this promptly we will send it to the Journal of the American Chemical Society ."
[ 25 ] Miller's manuscript was eventually published in Science in May 1953. [ 25 ]

In the original 1952 experiment, methane (CH 4 ), ammonia (NH 3 ), and hydrogen (H 2 ) were sealed together in a 2:2:1 ratio (1 part H 2 ) inside a sterile 5-L glass flask connected to a 500-mL flask half-full of water (H 2 O). The gas chamber was intended to represent Earth's prebiotic atmosphere , while the water simulated an ocean. The water in the smaller flask was boiled such that water vapor entered the gas chamber and mixed with the "atmosphere". A continuous electrical spark was discharged between a pair of electrodes in the larger flask. The spark passed through the mixture of gases and water vapor, simulating lightning. A condenser below the gas chamber allowed aqueous solution to accumulate in a U-shaped trap at the bottom of the apparatus, which was sampled. After a day, the solution that had collected at the trap was pink, and after a week of continuous operation the solution was deep red and turbid , which Miller attributed to organic matter adsorbed onto colloidal silica . [ 3 ] The boiling flask was then removed, and mercuric chloride (a poison) was added to prevent microbial contamination. The reaction was stopped by adding barium hydroxide and sulfuric acid , and the solution was evaporated to remove impurities. Using paper chromatography , Miller identified five amino acids present in the solution: glycine , α-alanine and β-alanine were positively identified, while aspartic acid and α-aminobutyric acid (AABA) were less certain, due to the spots being faint. [ 3 ]

Materials and samples from the original experiments remained in 2017 under the care of Miller's former student, Jeffrey Bada , a professor at UCSD 's Scripps Institution of Oceanography who also conducts origin of life research. [ 27 ] As of 2013, the apparatus used to conduct the experiment was on display at the Denver Museum of Nature and Science . [ 28 ]

In 1957 Miller published research describing the chemical processes occurring inside his experiment. [ 29 ] Hydrogen cyanide (HCN) and aldehydes (e.g., formaldehyde) were demonstrated to form as intermediates early in the experiment due to the electric discharge. [ 29 ] This agrees with current understanding of atmospheric chemistry , as HCN can generally be produced from reactive radical species in the atmosphere that arise when CH 4 and nitrogen break apart under ultraviolet (UV) light . [ 30 ] Similarly, aldehydes can be generated in the atmosphere from radicals resulting from CH 4 and H 2 O decomposition and other intermediates like methanol . [ 31 ] Several energy sources in planetary atmospheres can induce these dissociation reactions and subsequent hydrogen cyanide or aldehyde formation, including lightning, [ 32 ] ultraviolet light, [ 30 ] and galactic cosmic rays . [ 33 ] For example, radicals produced photochemically from CH 4 and H 2 O in the Miller–Urey atmosphere can recombine to yield formaldehyde, [ 31 ] and a photochemical path leads from NH 3 and CH 4 to HCN. [ 39 ]

Other active intermediate compounds ( acetylene , cyanoacetylene , etc.) have been detected in the aqueous solution of Miller–Urey-type experiments, [ 40 ] but the immediate HCN and aldehyde production, the production of amino acids accompanying the plateau in HCN and aldehyde concentrations, and the slowing of the amino acid production rate during HCN and aldehyde depletion provided strong evidence that Strecker amino acid synthesis was occurring in the aqueous solution.
[ 29 ] Strecker synthesis describes the reaction of an aldehyde, ammonia, and HCN to a simple amino acid through an aminoacetonitrile intermediate: RCHO + NH 3 + HCN → RCH(NH 2 )CN + H 2 O, followed by hydrolysis of the nitrile, RCH(NH 2 )CN + 2 H 2 O → RCH(NH 2 )COOH + NH 3 . Furthermore, water and formaldehyde can react via Butlerov's reaction to produce various sugars like ribose . [ 41 ] The experiments showed that simple organic compounds, including the building blocks of proteins and other macromolecules, can be formed abiotically from gases with the addition of energy.

There were a few similar spark discharge experiments contemporaneous with Miller–Urey. An article in The New York Times (March 8, 1953) titled "Looking Back Two Billion Years" describes the work of Wollman M. MacNevin at Ohio State University , before the Miller Science paper was published in May 1953. MacNevin was passing 100,000 V sparks through methane and water vapor and produced " resinous solids" that were "too complex for analysis." [ 25 ] [ 42 ] [ 43 ] Furthermore, K. A. Wilde submitted a manuscript to Science on December 15, 1952, before Miller submitted his paper to the same journal in February 1953. Wilde's work, published on July 10, 1953, used voltages up to only 600 V on a binary mixture of carbon dioxide (CO 2 ) and water in a flow system and did not note any significant reduction products. [ 44 ] According to some, the reports of these experiments explain why Urey was rushing Miller's manuscript through Science and threatening to submit it to the Journal of the American Chemical Society. [ 25 ]

By introducing an experimental framework to test prebiotic chemistry, the Miller–Urey experiment paved the way for future origin of life research. [ 45 ] In 1961, Joan Oró produced milligrams of the nucleobase adenine from a concentrated solution of HCN and NH 3 in water. [ 46 ] Oró found that several amino acids were also formed from HCN and ammonia under those conditions. [ 47 ] Experiments conducted later showed that the other RNA and DNA nucleobases could be obtained through simulated prebiotic chemistry with a reducing atmosphere . [ 48 ] [ 49 ] Other researchers also began using UV photolysis in prebiotic schemes, as the UV flux would have been much higher on early Earth. [ 50 ] For example, UV-photolysis of water vapor with carbon monoxide was found to yield various alcohols , aldehydes, and organic acids . [ 51 ] In the 1970s, Carl Sagan used Miller–Urey-type reactions to synthesize and experiment with complex organic particles dubbed " tholins ", which likely resemble particles formed in hazy atmospheres like that of Titan . [ 52 ]

Much work has been done since the 1950s toward understanding how Miller–Urey chemistry behaves in various environmental settings. In 1983, testing different atmospheric compositions, Miller and another researcher repeated experiments with varying proportions of H 2 , H 2 O, N 2 , CO 2 or CH 4 , and sometimes NH 3 . [ 53 ] They found that the presence or absence of NH 3 in the mixture did not significantly impact amino acid yield, as NH 3 was generated from N 2 during the spark discharge. [ 53 ] Additionally, CH 4 proved to be one of the most important atmospheric ingredients for high yields, likely due to its role in HCN formation. [ 53 ] Much lower yields were obtained with more oxidized carbon species in place of CH 4 , but similar yields could be reached with a high H 2 /CO 2 ratio. [ 53 ] Thus, Miller–Urey reactions work in atmospheres of other compositions as well, depending on the ratio of reducing and oxidizing gases. More recently, Jeffrey Bada and H.
James Cleaves, graduate students of Miller, hypothesized that the production of nitrites, which destroy amino acids, in CO2- and N2-rich atmospheres may explain low amino acid yields. [ 54 ] In a Miller-Urey setup with a less-reducing (CO2 + N2 + H2O) atmosphere, when they added calcium carbonate to buffer the aqueous solution and ascorbic acid to inhibit oxidation, yields of amino acids greatly increased, demonstrating that amino acids can still be formed in more neutral atmospheres under the right geochemical conditions. [ 54 ] In a prebiotic context, they argued, seawater would likely still be buffered, and ferrous iron could inhibit oxidation. [ 54 ] In 1999, after Miller suffered a stroke, he donated the contents of his laboratory to Bada. [ 27 ] In an old cardboard box, Bada discovered unanalyzed samples from modified experiments that Miller had conducted in the 1950s. [ 27 ] In a "volcanic" apparatus, Miller had fitted an aspirating nozzle to shoot a jet of steam into the reaction chamber. [ 7 ] [ 55 ] Using high-performance liquid chromatography and mass spectrometry, Bada's lab analyzed old samples from a set of experiments Miller conducted with this apparatus and found somewhat higher yields and a more diverse suite of amino acids. [ 7 ] [ 55 ] Bada speculated that injecting the steam into the spark could have split water into H and OH radicals, leading to more hydroxylated amino acids during Strecker synthesis. [ 7 ] [ 55 ] In a separate set of experiments, Miller added hydrogen sulfide (H2S) to the reducing atmosphere, and Bada's analyses of the products suggested order-of-magnitude higher yields, including some amino acids with sulfur moieties. [ 7 ] [ 56 ] A 2021 work highlighted the importance of the high-energy free electrons present in the experiment: it is these electrons that produce ions and radicals, and they represent an aspect of the experiment that needs to be better understood. [ 57 ] After comparing Miller–Urey experiments conducted in borosilicate glassware with those conducted in Teflon apparatuses, a 2021 paper suggested that the glass reaction vessel acts as a mineral catalyst, implicating silicate rocks as important surfaces in prebiotic Miller-Urey reactions. [ 58 ] While there is a lack of geochemical observations to constrain the exact composition of the prebiotic atmosphere, recent models point to an early "weakly reducing" atmosphere; that is, early Earth's atmosphere was likely dominated by CO2 and N2, not by the CH4 and NH3 used in the original Miller–Urey experiment. [ 59 ] [ 60 ] This is explained, in part, by the chemical composition of volcanic outgassing. Geologist William Rubey was one of the first to compile data on gases emitted from modern volcanoes; he concluded that they are rich in CO2, H2O, and likely N2, with varying amounts of H2, sulfur dioxide (SO2), and H2S. [ 60 ] [ 61 ] Therefore, if the redox state of Earth's mantle, which dictates the composition of outgassing, has been constant since formation, then the atmosphere of early Earth was likely weakly reducing, though there are some arguments for a more-reducing atmosphere during the first few hundred million years. [ 60 ] While the prebiotic atmosphere could have had a different redox condition than that of the Miller–Urey atmosphere, the modified Miller–Urey experiments described above demonstrated that amino acids can still be abiotically produced in less-reducing atmospheres under specific geochemical conditions.
[ 7 ] [ 53 ] [ 54 ] Furthermore, harkening back to Urey's original hypothesis of a "post-impact" reducing atmosphere, [ 23 ] a recent atmospheric modeling study has shown that an iron-rich impactor with a minimum mass of around 4×10²⁰–5×10²¹ kg would be enough to transiently reduce the entire prebiotic atmosphere, resulting in a Miller-Urey-esque H2-, CH4-, and NH3-dominated atmosphere that persists for millions of years. [ 9 ] Previous work has estimated from the lunar cratering record and the composition of Earth's mantle that between four and seven such impactors reached the Hadean Earth. [ 8 ] [ 9 ] [ 62 ] A large factor controlling the redox budget of early Earth's atmosphere is the rate of atmospheric escape of H2 after Earth's formation. Atmospheric escape (common to young, rocky planets) occurs when gases in the atmosphere have sufficient kinetic energy to overcome gravitational energy. [ 63 ] It is generally accepted that the timescale of hydrogen escape is short enough that H2 made up less than 1% of the atmosphere of prebiotic Earth, [ 60 ] but in 2005 a hydrodynamic model of hydrogen escape predicted escape rates two orders of magnitude lower than previously thought, maintaining a hydrogen mixing ratio of 30%. [ 64 ] A hydrogen-rich prebiotic atmosphere would have large implications for Miller-Urey synthesis in the Hadean and Archean, but later work suggests the solutions in that model might have violated conservation of mass and energy. [ 63 ] [ 65 ] That said, during hydrodynamic escape, lighter molecules like hydrogen can "drag" heavier molecules with them through collisions, and recent modeling of xenon escape has pointed to a hydrogen atmospheric mixing ratio of at least 1% at times during the Archean. [ 66 ] Taken together, these results generally support the view that early Earth's atmosphere was weakly reducing, with transient instances of highly reducing compositions following large impacts. [ 9 ] [ 23 ] [ 60 ] Conditions similar to those of the Miller–Urey experiments are present in other regions of the Solar System, often with ultraviolet light substituting for lightning as the energy source for chemical reactions. [ 67 ] [ 68 ] [ 69 ] The Murchison meteorite that fell near Murchison, Victoria, Australia in 1969 was found to contain an amino acid distribution remarkably similar to Miller-Urey discharge products. [ 27 ] Analysis of the organic fraction of the Murchison meteorite with Fourier-transform ion cyclotron resonance mass spectrometry detected over 10,000 unique compounds, [ 70 ] albeit at very low (ppb–ppm) concentrations. [ 71 ] [ 72 ] In this way, the organic composition of the Murchison meteorite is seen as evidence of Miller-Urey synthesis outside Earth. Comets and other icy outer-Solar-System bodies are thought to contain large amounts of complex carbon compounds (such as tholins) formed by processes akin to Miller-Urey setups, darkening the surfaces of these bodies. [ 52 ] [ 73 ] Some argue that comets bombarding the early Earth could have provided a large supply of complex organic molecules along with water and other volatiles; [ 74 ] [ 75 ] however, the very low concentrations of biologically relevant material, combined with uncertainty about the survival of organic matter upon impact, make this difficult to determine. [ 15 ] The Miller–Urey experiment was proof that the building blocks of life could be synthesized abiotically from gases, and it introduced a new prebiotic chemistry framework through which to study the origin of life.
Simulations of protein sequences present in the last universal common ancestor (LUCA), the last shared ancestor of all species extant today, show an enrichment in simple amino acids that were available in the prebiotic environment according to Miller-Urey chemistry. This suggests that the genetic code from which all life evolved was rooted in a smaller suite of amino acids than those used today. [ 76 ] Thus, while creationist arguments focus on the fact that Miller–Urey experiments have not generated all 22 genetically encoded amino acids, [ 77 ] this does not actually conflict with the evolutionary perspective on the origin of life. [ 76 ] Another common criticism is that the racemic (containing both L and D enantiomers) mixture of amino acids produced in a Miller–Urey experiment poses a problem for abiogenesis theories, [ 77 ] as life on Earth today uses almost exclusively L-amino acids. [ 79 ] While it is true that Miller-Urey setups produce racemic mixtures, [ 80 ] the origin of homochirality is a separate area of origin-of-life research. [ 81 ] Recent work demonstrates that magnetic mineral surfaces like magnetite can be templates for the enantioselective crystallization of chiral molecules, including RNA precursors, due to the chiral-induced spin selectivity (CISS) effect. [ 82 ] [ 83 ] Once an enantioselective bias is introduced, homochirality can then propagate through biological systems in various ways. [ 84 ] In this way, enantioselective synthesis is not required of Miller-Urey reactions if other geochemical processes in the environment are introducing homochirality. Finally, Miller-Urey and similar experiments primarily deal with the synthesis of monomers; polymerization of these building blocks to form peptides and other more complex structures is the next step of prebiotic chemistry schemes. [ 85 ] Polymerization requires condensation reactions, which are thermodynamically unfavored in aqueous solution because they expel water molecules. [ 86 ] Scientists as far back as John Desmond Bernal in the late 1940s thus speculated that clay surfaces would play a large role in abiogenesis, as they might concentrate monomers. [ 87 ] Several such models for mineral-mediated polymerization have emerged, such as the interlayers of layered double hydroxides like green rust over wet-dry cycles. [ 88 ] Some scenarios for peptide formation have been proposed that are even compatible with aqueous solutions, such as the hydrophobic air-water interface [ 86 ] and a novel "sulfide-mediated α-aminonitrile ligation" scheme, in which amino acid precursors come together to form peptides. [ 89 ] Polymerization of life's building blocks is an active area of research in prebiotic chemistry. The amino acids produced and identified in the "classic" 1952 experiment have been tabulated as analyzed by Miller in 1952 [ 3 ] and more recently by Bada and collaborators with modern mass spectrometry, [ 7 ] including the 2008 re-analysis of vials from the volcanic spark discharge experiment [ 7 ] [ 55 ] and the 2010 re-analysis of vials from the H2S-rich spark discharge experiment. [ 7 ] [ 56 ] While not all proteinogenic amino acids have been produced in spark discharge experiments, it is generally accepted that early life used a simpler set of prebiotically available amino acids. [ 76 ]
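The Strecker route invoked above can be written schematically. The following is a sketch for a generic aldehyde RCHO; the notation is illustrative, and side reactions present in the actual experiment are omitted:

```latex
% Schematic Strecker synthesis for a generic aldehyde RCHO
\begin{align*}
\text{RCHO} + \text{NH}_3 &\rightarrow \text{RCH=NH} + \text{H}_2\text{O}
  && \text{(imine formation)}\\
\text{RCH=NH} + \text{HCN} &\rightarrow \text{RCH(NH}_2)\text{CN}
  && \text{($\alpha$-aminonitrile)}\\
\text{RCH(NH}_2)\text{CN} + 2\,\text{H}_2\text{O} &\rightarrow \text{RCH(NH}_2)\text{COOH} + \text{NH}_3
  && \text{(hydrolysis)}
\end{align*}
```

With R = H the product is glycine; with R = CH3, alanine.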
https://en.wikipedia.org/wiki/Miller–Urey_experiment
Millieme is a French word meaning one thousandth of something. In English it most often refers to a monetary subunit equal to one thousandth of a base currency unit, such as the Egyptian millieme.
https://en.wikipedia.org/wiki/Millieme
In mathematics, Milliken's tree theorem in combinatorics is a partition theorem generalizing Ramsey's theorem to infinite trees, objects with more structure than sets. Let T be a finitely splitting rooted tree of height ω, n a positive integer, and $\mathbb{S}_T^n$ the collection of all strongly embedded subtrees of T of height n. In one of its simple forms, Milliken's tree theorem states that if $\mathbb{S}_T^n = C_1 \cup \cdots \cup C_r$, then for some strongly embedded infinite subtree R of T, $\mathbb{S}_R^n \subset C_i$ for some i ≤ r. This immediately implies Ramsey's theorem; take the tree T to be a linear ordering on ω vertices. Define $\mathbb{S}^n = \bigcup_T \mathbb{S}_T^n$, where T ranges over finitely splitting rooted trees of height ω. Milliken's tree theorem says that not only is $\mathbb{S}^n$ partition regular for each n < ω, but that the homogeneous subtree R guaranteed by the theorem is strongly embedded in T. Call T an α-tree if each branch of T has cardinality α. Define $\mathrm{Succ}(p, P) = \{q \in P : q \geq p\}$, and $IS(p, P)$ to be the set of immediate successors of p in P. Suppose S is an α-tree and T is a β-tree, with 0 ≤ α ≤ β ≤ ω. S is strongly embedded in T if:
https://en.wikipedia.org/wiki/Milliken's_tree_theorem
In mathematics, the Milliken–Taylor theorem in combinatorics is a generalization of both Ramsey's theorem and Hindman's theorem. It is named after Keith Milliken and Alan D. Taylor. Let $\mathcal{P}_f(\mathbb{N})$ denote the set of finite subsets of $\mathbb{N}$, and define a partial order on $\mathcal{P}_f(\mathbb{N})$ by α < β if and only if max α < min β. Given a sequence of integers $\langle a_n \rangle_{n=0}^{\infty} \subset \mathbb{N}$ and k > 0, let

$$[FS(\langle a_n \rangle_{n=0}^{\infty})]_<^k = \left\{ \left\{ \sum_{t \in \alpha_1} a_t, \ldots, \sum_{t \in \alpha_k} a_t \right\} : \alpha_1, \ldots, \alpha_k \in \mathcal{P}_f(\mathbb{N}) \text{ and } \alpha_1 < \cdots < \alpha_k \right\}.$$

Let $[S]^k$ denote the k-element subsets of a set S. The Milliken–Taylor theorem says that for any finite partition $[\mathbb{N}]^k = C_1 \cup C_2 \cup \cdots \cup C_r$, there exist some i ≤ r and a sequence $\langle a_n \rangle_{n=0}^{\infty} \subset \mathbb{N}$ such that $[FS(\langle a_n \rangle_{n=0}^{\infty})]_<^k \subset C_i$. For each $\langle a_n \rangle_{n=0}^{\infty} \subset \mathbb{N}$, call $[FS(\langle a_n \rangle_{n=0}^{\infty})]_<^k$ an MT^k set. Then, alternatively, the Milliken–Taylor theorem asserts that the collection of MT^k sets is partition regular for each k.
https://en.wikipedia.org/wiki/Milliken–Taylor_theorem
A millimeter wave scanner is a whole-body imaging device used for detecting objects concealed underneath a person's clothing using a form of electromagnetic radiation. Typical uses for this technology include detection of items for commercial loss prevention, smuggling, and screening for weapons at government buildings and airport security checkpoints. It is one of the common technologies used for full-body imaging; a competing technology is backscatter X-ray. Millimeter wave scanners come in two varieties: active and passive. Active scanners direct millimeter wave energy at the subject and then interpret the reflected energy. Passive systems create images using only ambient radiation and radiation emitted from the human body or objects. [ 1 ] [ 2 ] [ 3 ] In active scanners, the millimeter wave (a type of microwave) is transmitted from two antennas simultaneously as they rotate around the body. The wave energy reflected back from the body or other objects on the body is used to construct a three-dimensional image, which is displayed on a remote monitor for analysis. [ 4 ] [ 1 ] [ 2 ] [ 5 ] The first millimeter-wave full body scanner was developed at the Pacific Northwest National Laboratory (PNNL) in Richland, Washington, one of the eight national laboratories Battelle manages for the U.S. Department of Energy. In the 1990s, PNNL patented its 3-D holographic-imagery technology, with research and development support provided by the TSA and the Federal Aviation Administration (FAA). [ 6 ] In 2002, Silicon Valley startup SafeView, Inc. obtained an exclusive license to PNNL's (background) intellectual property in order to commercialize the technology. [ 7 ] From 2002 to 2006, SafeView developed a production-ready millimeter-wave body scanner system and software that included scanner control, algorithms for threat detection and object recognition, and techniques to conceal raw images in order to resolve privacy concerns. During this time, SafeView developed foreground IP through several patent applications. By 2006, SafeView's body scanning portals had been installed and trialed at various locations around the globe: border crossings in Israel, international airports such as Mexico City and Amsterdam's Schiphol, ferry landings in Singapore, railway stations in the UK, government buildings such as in The Hague, and commercial buildings in Tokyo. They were also employed to secure soldiers and workers in Iraq's Green Zone. In 2006, SafeView was acquired by L-3 Communications. [ 8 ] [ 9 ] From 2006 to 2020, L-3 Communications (later L3Harris) continued to make incremental enhancements to its scanner systems while deploying thousands of units worldwide. In 2020, Leidos acquired L3Harris's Security Detection and Automation businesses, which included the body scanner business unit. [ 10 ] Historically, privacy advocates were concerned about the use of full body scanning technology because it used to display a detailed image of the surface of the skin under clothing, prosthetics including breast prostheses, and other medical equipment normally hidden, such as colostomy bags. [ 11 ] These privacy advocates called the images "virtual strip searches". [ 12 ] However, in 2013 the U.S. Congress prohibited the display of detailed images and required the display of metal and other objects on a generic body outline instead of the person's actual skin.
Such generic body outlines can be made by Automatic Target Recognition (ATR) software. As of June 1, 2013, all back-scatter full body scanners had been removed from use at U.S. airports because they could not comply with the TSA's software requirements. Millimeter-wave full body scanners utilize ATR and are compliant with TSA software requirements. [ 12 ] Software imaging technology can also mask specific body parts. [ 5 ] Proposed remedies for privacy concerns include scanning only people who are independently detected to be carrying contraband, or developing technology to mask genitals and other private parts. In some locations, travelers have the choice between the body scan and a "patdown". In Australia, the scans are mandatory; [ 13 ] in the UK, however, passengers may opt out of being scanned. [ 14 ] In this case, the individual must be screened by an alternative method that includes at least an enhanced hand search in private, as set out on the UK government website. In the United States, the Transportation Security Administration (TSA) claimed to have taken steps to address privacy objections. The TSA claimed that the images captured by the machines were not stored. On the other hand, the U.S. Marshals Service admitted that it had saved thousands of images captured at a Florida checkpoint. [ 15 ] The officer sitting at the machine does not see the image; rather, that screen shows only whether the viewing officer has confirmed that the passenger has cleared. Conversely, the officer who views the image does not see the person being scanned by the device. [ 16 ] In some locations, updated software has removed the necessity of a separate officer in a remote location. These units now generate a generic image of a person, with specific areas of suspicion highlighted by boxes; if no suspicious items are detected by the machine, a green screen appears instead, indicating that the passenger is cleared. Concerns remain about alternative ways to capture and disseminate the images, and the protective steps often do not entirely address the underlying privacy concerns. Subjects may object to anyone viewing them in a state of effective undress, even if it is not the agent next to the machine and the image is not retrievable. Reports of full-body scanner images being improperly and perhaps illegally saved and disseminated have emerged. [ 17 ] Millimeter wavelength radiation is a subset of the microwave radio frequency spectrum. Even at its high-energy end, it is still more than 3 orders of magnitude lower in energy than its nearest radiotoxic neighbour (ultraviolet) in the electromagnetic spectrum. As such, millimeter wave radiation is non-ionizing and incapable of causing cancers by radiolytic DNA bond cleavage. Due to the shallow penetration depth of millimeter waves into tissue (typically less than 1 mm), [ 18 ] acute biological effects of irradiation are localized in the epidermal and dermal layers and manifest primarily as thermal effects. [ 18 ] [ 19 ] [ 20 ] [ 21 ] There is no clear evidence to date of harmful effects other than those caused by localised heating and the ensuing chemical changes (expression of heat shock proteins, denaturation, proteolysis, and inflammatory response; see also mobile phone radiation and health). The energy density required to produce thermal injury in skin is much higher than that typically delivered by an active millimeter wave scanner.
[ 19 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] The fragmented or misfolded molecules resulting from thermal injury may be delivered to neighbouring cells through diffusion and into the systemic circulation through perfusion. Increased skin permeability under irradiation exacerbates this possibility. [ 21 ] It is therefore plausible that the molecular products of thermal injury (and their distribution to areas remote from the site of irradiation) could cause secondary injury. Note that this would be no different from the effects of a thermal injury sustained in a more conventional fashion. Due to the increasing ubiquity of millimeter wave radiation (see WiGig), research into its potential biological effects is ongoing. [ 20 ] [ 22 ] [ 26 ] Independent of thermal injury, a 2009 study funded by the National Institutes of Health, conducted by the U.S. Department of Energy's Los Alamos National Laboratory (Theoretical Division and Center for Nonlinear Studies) and Harvard University Medical School, found that terahertz-range radiation creates changes in DNA breathing dynamics, creating apparent interference with the naturally occurring local strand-separation dynamics of double-stranded DNA and, consequently, with DNA function. [ 27 ] The same study was referenced in an MIT Technology Review article on October 30, 2009. Millimeter wave scanners should not be confused with backscatter X-ray scanners, a completely different technology used for similar purposes at airports. X-rays are ionizing radiation, more energetic than millimeter waves by more than five orders of magnitude, and raise concerns about possible mutagenic potential (a numerical sketch of these energy comparisons follows at the end of this passage). The efficacy of millimeter wave scanners in detecting threatening objects has been questioned. Formal studies demonstrated the relative inability of these scanners to detect objects, dangerous or not, on the person being scanned. [ 28 ] Additionally, some studies suggested that the cost-benefit ratios of these scanners are poor. [ 29 ] As of January 2011, there had been no report of a terrorist capture as a result of a body scanner. In a series of repeated tests, the body scanners were not able to detect a handgun hidden in an undercover agent's undergarments, but the agents responsible for monitoring the body scanners were deemed at fault for not recognizing the concealed weapon. [ 30 ] Millimeter wave scanners also have problems reading through sweat, in addition to yielding false positives from buttons and folds in clothing. [ 31 ] Some countries, such as Germany, have reported a false-positive rate of 54%. [ 32 ] While airport security may be the most visible and public use of body scanners, companies have also deployed passive employee screening to help reduce inventory shrink at key distribution centers. [ 33 ] [ 34 ] [ 35 ] The UK Border Agency (the predecessor of UK Visas and Immigration) initiated the use of passive screening technology to detect illicit goods. [ 36 ] As of April 2009, the U.S. Transportation Security Administration began deploying scanners at airports, e.g., at Los Angeles International Airport (LAX). [ 5 ] These machines have also been deployed in the Jersey City PATH train system. [ 37 ] They have also been deployed at San Francisco International Airport (SFO), as well as Salt Lake City International Airport (SLC), Indianapolis International Airport (IND), Detroit-Wayne County Metropolitan Airport (DTW), Minneapolis-St. Paul International Airport (MSP), and Las Vegas International Airport (LAS).
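To make the order-of-magnitude comparisons above concrete, here is a small Python sketch using E = hf; the representative frequencies chosen for millimeter waves, UV, and X-rays are illustrative assumptions, not values from the cited studies.

```python
import math

PLANCK_EV = 4.135667696e-15  # Planck constant in eV*s

def photon_energy_ev(freq_hz: float) -> float:
    """Photon energy E = h*f, in electronvolts."""
    return PLANCK_EV * freq_hz

# Representative frequencies (illustrative choices):
mm_wave = photon_energy_ev(300e9)   # 300 GHz, top of the millimeter-wave band
uv      = photon_energy_ev(1.0e15)  # ~300 nm ultraviolet
x_ray   = photon_energy_ev(2.4e18)  # ~10 keV X-ray

print(f"mm-wave photon: {mm_wave:.2e} eV")
print(f"UV photon:      {uv:.2e} eV  ({math.log10(uv / mm_wave):.1f} orders above mm-wave)")
print(f"X-ray photon:   {x_ray:.2e} eV ({math.log10(x_ray / mm_wave):.1f} orders above mm-wave)")
```

With these choices the UV photon carries about 3.5 orders of magnitude more energy than the millimeter-wave photon, and the X-ray photon nearly 7 orders more, consistent with the "more than 3" and "more than five" figures quoted above.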
Three security scanners using millimeter waves were put into use at Schiphol Airport in Amsterdam on 15 May 2007, with more expected to be installed later. The passenger's head is masked from the view of the security personnel. Passive scanners are also currently in use at Fiumicino Airport, Italy; [ 38 ] they will next be deployed at Malpensa Airport. [ 39 ] The federal courthouse in Orlando, Florida employs passive screening devices capable of recording and storing images. [ 40 ] In 2008, the Canadian Air Transport Security Authority (CATSA) held a trial of the scanners at Kelowna International Airport in Kelowna, British Columbia. [ 41 ] Before the trial, the Office of the Privacy Commissioner of Canada (OPCC) reviewed a preliminary Privacy Impact Assessment, and CATSA accepted recommendations from the OPCC. [ 42 ] In October 2009, an Assistant Privacy Commissioner, Chantal Bernier, announced that the OPCC had tested the scanning procedure and that the privacy safeguards CATSA had agreed to would "meet the test for the proper reconciliation of public safety and privacy". [ 43 ] In January 2010, Transport Canada confirmed that 44 scanners had been ordered, to be used in secondary screening at eight Canadian airports. [ 44 ] The announcement resulted in controversies over privacy, effectiveness, and whether the exemption for those under 18 would be too large a loophole. [ 45 ] [ 46 ] [ 47 ] Scanners are currently used in Saskatoon (YXE), Toronto (YYZ), Montréal (YUL), Quebec (YQB), Calgary (YYC), Edmonton (YEG), Vancouver (YVR), Halifax (YHZ), and Winnipeg (YWG). Ninoy Aquino International Airport in Manila installed body scanners from Smiths in all four airport terminals in 2015. [ 48 ] The scanners are not yet in use, and are controversial among some airport security screeners. [ 49 ] Beyond security, scanners can be used for 3D physical measurement of body shape for applications such as apparel design, prosthetic device design, ergonomics, entertainment, and gaming.
https://en.wikipedia.org/wiki/Millimeter_wave_scanner
A million service units (MSU) is a measurement of the amount of processing work a computer can perform in one hour. The term is most commonly associated with IBM mainframes; it reflects how IBM rates the machine in terms of charging capacity. The technical measure of processing power on IBM mainframes, however, is Service Units per second (SU/sec). One "service unit" originally related to an actual hardware performance measurement (a specific model's instruction performance). However, that relationship disappeared many years ago as hardware and software evolved. MSUs are now like other common (but physically imprecise) measurements, such as "cans of coffee" or "tubes of toothpaste": cans and tubes can vary in physical size depending on brand, market, and other factors; some coffee cans contain 500 grams and others 13 ounces, for example. Most mainframe software vendors use a licensing and pricing model in which customers are charged per MSU consumed (i.e., the number of operations the software has performed) in addition to hardware and software installation costs. [ 1 ] Others charge by total MSU system capacity. Thus, while the MSU is an artificial construction, it does have a direct financial implication; in fact, software charges are why the MSU measurement exists at all. IBM publishes MSU ratings for every mainframe server model, including the zSeries and System z9 ranges. For example, a zSeries z890 Model 110 is a 4-MSU system. MSU ratings are always rounded to whole numbers. IBM enforces an MSU rule called the "technology dividend": each new mainframe model has a 10% lower MSU rating for the same level of system capacity. For example, when IBM introduced the System z9-109 in 2005, if a particular z9 configuration could process the same number of transactions per second as its predecessor (a particular z990 configuration), then it would do so with 10% fewer MSUs. The lower MSU rating means lower software costs, providing an incentive for customers to upgrade. However, as software costs are not linear in MSUs, decreasing or increasing MSUs will not produce a proportional change in software costs: the "least expensive" MSUs are the ones added (when MSUs increase) or removed (when MSUs decrease). For example, a 10% increase in MSUs will result in a software cost increase of less than 10%; how much less (or how much is saved, when reducing MSUs) depends on numerous other factors.
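Because software charges against MSUs are tiered rather than linear, a small model makes the sub-linear behavior described above concrete. The tier boundaries and per-MSU rates below are hypothetical, invented purely for illustration, and do not reflect any actual IBM or vendor price list.

```python
def msu_software_cost(msus: int, tiers=((0, 10.0), (45, 7.0), (175, 4.0))) -> float:
    """Tiered pricing: each tier is (starting_msu, price_per_msu).
    MSUs falling in higher tiers are cheaper, so total cost grows
    sub-linearly with capacity (hypothetical numbers throughout)."""
    cost = 0.0
    for i, (start, rate) in enumerate(tiers):
        end = tiers[i + 1][0] if i + 1 < len(tiers) else float("inf")
        if msus > start:
            cost += (min(msus, end) - start) * rate
    return cost

base = msu_software_cost(200)    # e.g. a 200-MSU system
bigger = msu_software_cost(220)  # 10% more MSUs
print(f"{(bigger / base - 1) * 100:.1f}% cost increase for 10% more MSUs")
```

With these made-up tiers, adding 10% more MSUs to a 200-MSU system raises the software cost by only about 5.5%, because the added MSUs all fall in the cheapest tier.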
https://en.wikipedia.org/wiki/Million_service_units
Million years ago, abbreviated as Mya, Myr (megayear) or Ma (megaannum), is a unit of time equal to 1,000,000 years (i.e., 1×10⁶ years), or approximately 31.6 teraseconds. Myr is in common use in fields such as Earth science and cosmology. Myr is also used with Mya or Ma; together they make a reference system, one referring to a quantity (a duration), the other to a particular point in a year-numbering system counting time before the present. Myr is deprecated in some geology literature but is standard in astronomy; where the unit appears in either field, it is usually written "Myr" (megayears, or million years). In geology, a debate remains open concerning the use of Myr (duration) plus Mya (million years ago) versus using only the term Ma. [ 1 ] [ 2 ] In either case, the term Ma is used in geology literature conforming to ISO 31-1 (now ISO 80000-3) and NIST 811 recommended practices. Traditional-style geology literature is written: "The Cretaceous started 145 Ma and ended 66 Ma, lasting for 79 Myr." The "ago" is implied, so that any such year number "X Ma" between 66 and 145 is "Cretaceous", for good reason. But the counter-argument is that having Myr for a duration and Mya for an age mixes unit systems and tempts capitalization errors: "million" need not be capitalized, but "mega" must be, and "ma" would technically imply a milliyear (a thousandth of a year, or roughly 8.8 hours). On this side of the debate, one avoids Myr and simply adds "ago" explicitly (or adds BP), as in: "The Cretaceous started 145 Ma ago and ended 66 Ma ago, lasting for 79 Ma." In this case, "79 Ma" means only a quantity of 79 million years, without the meaning of "79 million years ago".
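A quick sketch of the unit arithmetic above; the Julian year of 365.25 days is assumed here as the conversion convention, which the article does not itself specify.

```python
JULIAN_YEAR_S = 365.25 * 24 * 3600   # seconds in a Julian year (assumed convention)
MYR_S = 1_000_000 * JULIAN_YEAR_S    # one megayear in seconds

print(f"1 Myr = {MYR_S:.4e} s = {MYR_S / 1e12:.1f} teraseconds")

# Ages ("Ma") are points in time; durations ("Myr") are differences between them.
cretaceous_start_ma, cretaceous_end_ma = 145, 66
print(f"Cretaceous lasted {cretaceous_start_ma - cretaceous_end_ma} Myr")
```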
https://en.wikipedia.org/wiki/Million_years_ago
One millionth is equal to 0.000001, or 1×10⁻⁶ in scientific notation. It is the reciprocal of a million and can also be written as 1⁄1,000,000. [ 1 ] Units using this fraction can be indicated using the prefix "micro-", from Greek, meaning "small". [ 2 ] Quantities of this size are expressed using the symbol μ (the Greek letter mu). [ 3 ] "Millionth" can also mean the ordinal number that comes after the nine hundred ninety-nine thousand nine hundred ninety-ninth and before the million-and-first. [ 4 ]
https://en.wikipedia.org/wiki/Millionth
Millipede memory is a form of non-volatile computer memory. It promised a data density of more than 1 terabit per square inch (1 gigabit per square millimeter), which is about the limit of perpendicular recording hard drives. Millipede storage technology was pursued as a potential replacement for magnetic recording in hard drives and as a means of reducing the physical size of the technology to that of flash media. IBM demonstrated a prototype millipede storage device at CeBIT 2005 and was trying to make the technology commercially available by the end of 2007. However, because of concurrent advances in competing storage technologies, no commercial product has been made available since then. The main memory of modern computers is constructed from one of a number of DRAM-related devices. DRAM basically consists of a series of capacitors, which store data as the presence or absence of electrical charge. Each capacitor and its associated control circuitry, referred to as a cell, holds one bit, and multiple bits can be read or written in large blocks at the same time. DRAM is volatile: data is lost when power is removed. In contrast, hard drives store data on a disk covered with a magnetic material; data is represented by this material being locally magnetized. Reading and writing are accomplished by a single head, which waits for the requested memory location to pass under it while the disk spins. As a result, a hard drive's performance is limited by the mechanical speed of the motor, and it is generally hundreds of thousands of times slower than DRAM. However, since the "cells" in a hard drive are much smaller, the storage density of hard drives is much higher than that of DRAM. Hard drives are non-volatile: data is retained even after power is removed. Millipede storage attempts to combine features of both. Like a hard drive, millipede both stores data in a medium and accesses the data by moving the medium under the head. Also like hard drives, millipede's physical medium stores a bit in a small area, leading to high storage densities. However, millipede uses many nanoscopic heads that can read and write in parallel, thereby increasing the amount of data read at a given time. Mechanically, millipede uses numerous atomic force probes, each of which is responsible for reading and writing a large number of bits associated with it. These bits are stored as a pit, or the absence of one, in the surface of a thermo-active polymer deposited as a thin film on a carrier known as the sled. Any one probe can only read or write the fairly small area of the sled available to it, known as a storage field. Normally, electromechanical actuators move the sled so that the selected bits are positioned under the probe. These actuators are similar to those that position the read/write head in a typical hard drive, although the actual distance moved is tiny in comparison. The sled is moved in a scanning pattern to bring the requested bits under the probe, a process known as x/y scan. The amount of memory serviced by any one field/probe pair is fairly small, but so is its physical size. Thus, many such field/probe pairs are used to make up a memory device, and data reads and writes can be spread across many fields in parallel, increasing the throughput and improving the access times. For instance, a single 32-bit value would normally be written as a set of single bits sent to 32 different fields.
In the initial experimental devices, the probes were mounted in a 32×32 grid, for a total of 1,024 probes. Because this layout looked like the legs of a millipede (the animal), the name stuck. The design of the cantilever array involves making numerous mechanical cantilevers, on each of which a probe is mounted. All the cantilevers are made entirely of silicon, using surface micromachining at the wafer surface. Regarding the creation of indentations, or pits: non-crosslinked polymers have a low glass transition temperature, around 120 °C for PMMA, [ 4 ] and if the probe tip is heated above the glass transition temperature, it leaves a small indentation. Indentations are made at 3 nm lateral resolution. [ 5 ] By heating the probe immediately next to an indentation, the polymer re-melts and fills in the indentation, erasing it (see also: thermo-mechanical scanning probe lithography). After writing, the probe tip can be used to read the indentations. If each indentation is treated as one bit, a storage density of 0.9 Tb/in² could theoretically be achieved. [ 5 ] Each probe in the cantilever array stores and reads data thermo-mechanically, handling one bit at a time. To accomplish a read, the probe tip is heated to around 300 °C and moved into proximity with the data sled. If the probe is located over a pit, the cantilever pushes it into the hole, increasing the surface area in contact with the sled and, in turn, increasing the cooling as heat leaks into the sled from the probe. Where there is no pit at that location, only the very tip of the probe remains in contact with the sled, and the heat leaks away more slowly. The electrical resistance of the probe is a function of its temperature, rising as the temperature increases. Thus, when the probe drops into a pit and cools, this registers as a drop in resistance: a low resistance is translated to a "1" bit, and a high resistance to a "0" bit. While reading an entire storage field, the tip is dragged over the entire surface and the resistance changes are constantly monitored. To write a bit, the tip of the probe is heated to a temperature above the glass transition temperature of the polymer used to manufacture the data sled, which is generally made of acrylic glass; in this case the writing temperature is around 400 °C. To write a "1", the polymer in proximity to the tip is softened, and then the tip is gently touched to it, causing a dent. To erase the bit and return it to the zero state, the tip is instead pulled up from the surface, allowing surface tension to pull the surface flat again. Older experimental systems used a variety of erasure techniques that were generally more time-consuming and less successful. These older systems offered around 100,000 erase cycles, but the available references do not contain enough information to say whether this has been improved by the newer techniques. As one might expect, the need to heat the probes requires a fairly large amount of power for general operation. However, the exact amount depends on the speed at which data is being accessed: at slower rates the cooling during reads is smaller, as is the number of times the probe has to be heated to a higher temperature to write. When operated at data rates of a few megabits per second, Millipede is expected to consume about 100 milliwatts, which is in the range of flash memory technology and considerably below hard drives.
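A minimal Python sketch of the thermo-mechanical read-back logic described above; the resistance values and the threshold are invented for illustration and are not taken from IBM's published device parameters.

```python
def read_bits(resistances_ohm, threshold_ohm=950.0):
    """Classify probe resistance samples into bits.

    When the heated tip drops into a pit it cools faster, so its
    resistance falls; a sample below the threshold reads as '1',
    otherwise '0' (hypothetical values throughout).
    """
    return [1 if r < threshold_ohm else 0 for r in resistances_ohm]

# Simulated scan across one storage field: pits cool the tip (low R),
# flat polymer keeps it hot (high R).
samples = [1010.0, 930.0, 1005.0, 925.0, 920.0, 1000.0]
print(read_bits(samples))  # -> [0, 1, 0, 1, 1, 0]
```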
However, one of the main advantages of the Millipede design is that it is highly parallel, allowing it to run at much higher aggregate speeds, into the GB/s range. At those speeds one might expect power requirements more closely matching those of current hard drives; indeed, the data transfer speed of an individual probe is limited to the kilobits-per-second range, which amounts to a few megabits per second for an entire array. Experiments done at IBM's Almaden Research Center showed that individual tips could support data rates as high as 1-2 megabits per second, potentially offering aggregate speeds in the GB/s range. Millipede memory was proposed as a form of non-volatile computer memory intended to compete with flash memory in data storage capacity, reading and writing speed, and the physical size of the technology. However, other technologies have since surpassed it, and it does not appear to be a technology currently being pursued. The earliest-generation millipede devices used probes 10 nanometers in diameter and 70 nanometers in length, producing pits about 40 nm in diameter on fields of 92 μm × 92 μm. Arranged in a 32 × 32 grid, the resulting 3 mm × 3 mm chip stores 500 megabits of data, or 62.5 MB, resulting in an areal density (the number of bits per square inch) on the order of 200 Gbit/in². IBM initially demonstrated this device in 2003, planning to introduce it commercially in 2005. By that point hard drives were approaching 150 Gbit/in², and they have since surpassed it. Devices demonstrated at the CeBIT Expo in 2005 improved on the basic design, using a 64 × 64 cantilever chip with a 7 mm × 7 mm data sled, boosting the data storage capacity to 800 Gbit/in² using smaller pits. It appears the pit size can scale to about 10 nm, resulting in a theoretical areal density just over 1 Tbit/in². IBM planned to introduce devices based on this sort of density in 2007. For comparison, as of late 2011, laptop hard drives were shipping with a density of 636 Gbit/in², [ 6 ] and it is expected that heat-assisted magnetic recording and patterned media together could support densities of 10 Tbit/in². [ 7 ] Flash reached almost 250 Gbit/in² in early 2010. [ 8 ] As of 2015, because of concurrent advances in competing storage technologies, no commercial product had been made available.
https://en.wikipedia.org/wiki/Millipede_memory
A Millisecond Furnace is a device used for cracking naphtha into ethylene, [ 1 ] by extremely short (50 to 100 millisecond) exposure to temperatures of about 900 degrees Celsius, followed by a rapid quenching below 750 degrees Celsius. It was developed by M. W. Kellogg and Idemitsu in the 1970s. [ 2 ]
https://en.wikipedia.org/wiki/Millisecond_furnace
A milliwatt test (milliwatt line) is a test method or test facility used in telecommunications to measure line quality and transmission loss between stations or points in an analog telephone system. The test consists of transmitting an analog sinusoidal signal at a frequency of 1004 Hz with a power level of 0 (zero) dBm. By definition, this is the equivalent of a continuous power dissipation of 1 mW (milliwatt), i.e., the power consumed when a voltage of 0.775 V (RMS) is applied to a telephone line with a 600-ohm nominal impedance. [ 1 ] In the Bell System, central offices provided this type of service on a dedicated telephone number (a 102-type milliwatt line) for remote subscriber-line testing. In conjunction, a second line (a 100-type line) provided quiet termination. Various types of test lines were known as "100", "102", "104", etc., because these numbers accessed the test line in tandem offices in lieu of an area code. In digital central-office installations, the milliwatt test facility was implemented using a synthesized version of the 1004 Hz signal, often known as the digital milliwatt. +1 (503) 697-1000 is an example of such a milliwatt line served from a digital central office.
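The 0 dBm reference level above follows directly from P = V²/R; a quick sketch of the arithmetic:

```python
import math

def dbm_to_watts(dbm: float) -> float:
    """0 dBm is defined as 1 mW; each 10 dB is a factor of 10 in power."""
    return 1e-3 * 10 ** (dbm / 10)

def rms_voltage(power_w: float, impedance_ohm: float = 600.0) -> float:
    """V = sqrt(P * R) for a resistive termination."""
    return math.sqrt(power_w * impedance_ohm)

p = dbm_to_watts(0)  # the 1 mW test tone
print(f"{rms_voltage(p):.3f} V RMS into 600 ohms")  # ~0.775 V
```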
https://en.wikipedia.org/wiki/Milliwatt_test
In electrical engineering, Millman's theorem [ 1 ] (or the parallel generator theorem) is a method to simplify the solution of a circuit. Specifically, Millman's theorem is used to compute the voltage at the ends of a circuit made up of only branches in parallel. It is named after Jacob Millman, who proved the theorem. Let $e_k$ be the generators' voltages, and let $R_k$ be the resistances on the branches with voltage generators $e_k$. Then Millman's theorem states that the voltage at the ends of the circuit is given by [ 2 ]

$$v = \frac{\sum_k e_k / R_k}{\sum_k 1 / R_k},$$

that is, the sum of the short-circuit currents in each branch divided by the sum of the conductances of each branch. It can be proved by considering the circuit as a single supernode. [ 3 ] Then, according to Ohm's and Kirchhoff's laws, the voltage between the ends of the circuit is equal to the total current entering the supernode divided by the total equivalent conductance of the supernode. The total current is the sum of the currents in each branch. The total equivalent conductance of the supernode is the sum of the conductances of each branch, since all the branches are in parallel. [ 4 ] One method of deriving Millman's theorem starts by converting all the branches to current sources (which can be done using Norton's theorem). A branch that is already a current source is simply not converted. In the expression above, this is equivalent to replacing the $e_k / R_k$ term in the numerator with the current of the current generator, where the kth branch is the branch with the current generator. The parallel conductance of the current source is added to the denominator as for the series conductance of the voltage sources. An ideal current source has zero conductance (infinite resistance) and so adds nothing to the denominator. [ 5 ] If one of the branches is an ideal voltage source, Millman's theorem cannot be used, but in this case the solution is trivial: the voltage at the output is forced to the voltage of the ideal voltage source. The theorem does not work with ideal voltage sources because such sources have zero resistance (infinite conductance), so the summations in both the numerator and the denominator are infinite and the result is indeterminate. [ 6 ]
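A short numerical illustration of the formula above; the branch values are arbitrary examples, not taken from the cited references.

```python
def millman_voltage(branches):
    """branches: list of (e_k, R_k) voltage-source/resistance pairs.
    Returns (sum of short-circuit currents) / (sum of conductances)."""
    num = sum(e / r for e, r in branches)
    den = sum(1 / r for e, r in branches)
    return num / den

# Three parallel branches (example values): 10 V through 2 ohm,
# 5 V through 5 ohm, and a plain 10-ohm resistor (a 0 V source).
v = millman_voltage([(10, 2), (5, 5), (0, 10)])
print(f"node voltage = {v:.3f} V")  # (5 + 1 + 0) / (0.5 + 0.2 + 0.1) = 7.5 V
```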
https://en.wikipedia.org/wiki/Millman's_theorem
In number theory, Mills' constant is defined as the smallest positive real number A such that the floor of the double exponential function, $\lfloor A^{3^n} \rfloor$, is a prime number for all positive natural numbers n. This constant is named after William Harold Mills, who proved in 1947 the existence of A based on results of Guido Hoheisel and Albert Ingham on prime gaps. [ 1 ] Its value is unproven, but if the Riemann hypothesis is true, it is approximately 1.3063778838630806904686144926... (sequence A051021 in the OEIS). The primes generated by Mills' constant are known as Mills primes; if the Riemann hypothesis is true, the sequence begins 2, 11, 1361, 2521008887, ... If $a_i$ denotes the ith prime in this sequence, then $a_i$ can be calculated as the smallest prime number larger than $a_{i-1}^3$. In order to ensure that rounding $A^{3^n}$, for n = 1, 2, 3, ..., produces this sequence of primes, it must be the case that $a_i < (a_{i-1}+1)^3$. The Hoheisel–Ingham results guarantee that there exists a prime between any two sufficiently large cube numbers, which is sufficient to prove this inequality if we start from a sufficiently large first prime $a_1$. The Riemann hypothesis implies that there exists a prime between any two consecutive cubes, allowing the "sufficiently large" condition to be removed and the sequence of Mills primes to begin at $a_1 = 2$. For all $a > e^{e^{32.537}}$, there is at least one prime between $a^3$ and $(a+1)^3$. [ 2 ] This upper bound is much too large to be practical, as it is infeasible to check every number below that figure. However, the value of Mills' constant can be verified by calculating the first prime in the sequence that is greater than that figure. As of April 2017, the 11th number in the sequence is the largest one that has been proved prime; it has 20562 digits. [ 3 ] As of 2024, the largest known Mills probable prime (under the Riemann hypothesis) is 1,665,461 digits long (sequence A108739 in the OEIS). By calculating the sequence of Mills primes, one can approximate Mills' constant as $A \approx a_n^{3^{-n}}$. Caldwell and Cheng used this method to compute 6850 base-10 digits of Mills' constant under the assumption that the Riemann hypothesis is true. [ 4 ] There is no closed-form formula known for Mills' constant, and it is not even known whether this number is rational. [ 5 ] There is nothing special about the middle exponent value of 3. It is possible to produce similar prime-generating functions for different middle exponent values: in fact, for any real number above 2.106..., it is possible to find a different constant A that will work with this middle exponent to always produce primes. Moreover, if Legendre's conjecture is true, the middle exponent can be replaced [ 6 ] with the value 2 (sequence A059784 in the OEIS). Matomäki showed unconditionally (without assuming Legendre's conjecture) the existence of a (possibly large) constant A such that $\lfloor A^{2^n} \rfloor$ is prime for all n. [ 7 ] Additionally, Tóth proved that the floor function in the formula could be replaced with the ceiling function, so that there exists a constant B such that $\lceil B^{r^n} \rceil$ is also prime-representing for $r > 2.106\ldots$.
[ 8 ] In the case r = 3, the value of the constant B begins 1.24055470525201424067..., and the first few primes generated are 2, 7, and 337. Without assuming the Riemann hypothesis, Elsholtz proved that $\lfloor A^{10^{10n}} \rfloor$ is prime for all positive integers n, where $A \approx 1.00536773279814724017$, and that $\lfloor B^{3^{13n}} \rfloor$ is prime for all positive integers n, where $B \approx 3.8249998073439146171615551375$. [ 9 ]
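The cube-and-next-prime recurrence above is straightforward to reproduce. The sketch below assumes the Riemann hypothesis only in that it starts the sequence at a₁ = 2, and it approximates A as a_n^(3^−n); sympy is assumed to be available.

```python
from sympy import nextprime

def mills_primes(count: int):
    """Generate Mills primes: a_1 = 2, then a_i = smallest prime > a_{i-1}^3
    (starting at 2 is valid under the Riemann hypothesis)."""
    a = 2
    primes = [a]
    for _ in range(count - 1):
        a = nextprime(a ** 3)
        primes.append(a)
    return primes

primes = mills_primes(5)
print(primes)  # [2, 11, 1361, 2521008887, ...]

# Approximate Mills' constant as A ~ a_n ** (3 ** -n); improves with n.
n = len(primes)
print(float(primes[-1]) ** (3.0 ** -n))  # ~1.3063...
```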
https://en.wikipedia.org/wiki/Mills'_constant
In fluid dynamics, the Milne-Thomson circle theorem, or simply the circle theorem, is a statement giving a new stream function for a fluid flow when a cylinder is placed into that flow. [ 1 ] [ 2 ] It was named after the English mathematician L. M. Milne-Thomson. Let $f(z)$ be the complex potential for a fluid flow, where all singularities of $f(z)$ lie in $|z| > a$. If a circle $|z| = a$ is placed into that flow, the complex potential for the new flow is given by [ 3 ]

$$w(z) = f(z) + \overline{f\!\left(\frac{a^2}{\bar{z}}\right)},$$

with the same singularities as $f(z)$ in $|z| > a$, and $|z| = a$ is a streamline. On the circle $|z| = a$, $z\bar{z} = a^2$; therefore $w(z) = f(z) + \overline{f(z)} = 2\,\mathrm{Re}\,f(z)$ is real there, so the stream function (the imaginary part of $w$) vanishes on the circle, making it a streamline. Consider a uniform irrotational flow $f(z) = Uz$ with velocity $U$ flowing in the positive $x$ direction, and place an infinitely long cylinder of radius $a$ in the flow with the center of the cylinder at the origin. Then $f(a^2/\bar{z}) = Ua^2/\bar{z}$, so $\overline{f(a^2/\bar{z})} = Ua^2/z$; hence, using the circle theorem,

$$w(z) = Uz + \frac{Ua^2}{z}$$

represents the complex potential of uniform flow over a cylinder.
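A quick numerical check of the uniform-flow example above, verifying that the stream function (the imaginary part of the complex potential) is zero on the circle |z| = a; the parameter values are arbitrary.

```python
import cmath

U, a = 1.0, 2.0  # free-stream speed and cylinder radius (example values)

def f(z):
    return U * z  # complex potential of the undisturbed uniform flow

def w(z):
    """Milne-Thomson: w(z) = f(z) + conjugate(f(a^2 / conjugate(z)))."""
    return f(z) + (f(a**2 / z.conjugate())).conjugate()

# On |z| = a the imaginary part (the stream function) should vanish.
for k in range(4):
    z = a * cmath.exp(1j * (0.3 + k))
    print(f"psi({z:.2f}) = {w(z).imag:.2e}")  # ~0 up to rounding error
```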
https://en.wikipedia.org/wiki/Milne-Thomson_circle_theorem
The Milne model was a special-relativistic cosmological model of the universe proposed by Edward Arthur Milne in 1935. [ 1 ] It is mathematically equivalent to a special case of the FLRW model in the limit of zero energy density, and it obeys the cosmological principle. The Milne model is also similar to Rindler space in that both are simple re-parameterizations of flat Minkowski space. Since it features both zero energy density and maximally negative spatial curvature, the Milne model is inconsistent with cosmological observations: cosmologists actually observe the universe's density parameter to be consistent with unity and its curvature to be consistent with flatness. [ 2 ] The Milne universe is a special case of the more general Friedmann–Lemaître–Robertson–Walker (FLRW) model. The Milne solution can be obtained from the more generic FLRW model by demanding that the energy density, pressure, and cosmological constant all equal zero and that the spatial curvature is negative. From these assumptions and the Friedmann equations it follows that the scale factor must depend linearly on the time coordinate. [ 3 ] [ 4 ] Setting the spatial curvature and the speed of light to unity, the metric for a Milne universe can be expressed with hyperspherical coordinates as [ 4 ] [ 5 ]

$$ds^2 = dt^2 - t^2\left(d\chi^2 + \sinh^2\!\chi \, d\Omega^2\right),$$

where $d\Omega^2$ is the metric for a two-sphere and $\sinh\chi$ is the curvature-corrected radial component for negatively curved space, with $\chi$ varying between 0 and $+\infty$. The empty space that the Milne model describes can be identified with the inside of a light cone of an event in Minkowski space by a change of coordinates. [ 4 ] Milne developed this model independently of general relativity but with awareness of special relativity. As he initially described it, the model has no expansion of space, so all of the redshift (except that caused by peculiar velocities) is explained by a recessional velocity associated with the hypothetical "explosion". However, the mathematical equivalence of the zero-energy-density ($\rho = 0$) version of the FLRW metric to Milne's model implies that a full general-relativistic treatment using Milne's assumptions would result in a linearly increasing scale factor for all time, since the deceleration parameter is uniquely zero for such a model. Milne proposed that the universe's density changes in time because of an initial outward explosion of matter. Milne's model assumes an inhomogeneous density function which is Lorentz invariant (around the event t = x = y = z = 0). When rendered graphically, Milne's density distribution shows a three-dimensional spherical Lobachevskian pattern with outer edges moving outward at the speed of light. Every inertial body perceives itself to be at the center of the explosion of matter (see observable universe), and sees the local universe as homogeneous and isotropic in the sense of the cosmological principle. In order to be consistent with general relativity, the universe's density must be negligible in comparison to the critical density at all times for which the Milne model is taken to apply.
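The identification with the interior of a light cone mentioned above is the standard coordinate change; the following sketch suppresses the angular part for brevity.

```latex
% Milne coordinates (t, chi) mapped into Minkowski coordinates (T, X):
% the image satisfies T^2 - X^2 = t^2 > 0, i.e. the light-cone interior,
% and the flat line element reproduces the Milne metric with a(t) = t.
\begin{align*}
T &= t\cosh\chi, & X &= t\sinh\chi,\\
T^2 - X^2 &= t^2, & dT^2 - dX^2 &= dt^2 - t^2\,d\chi^2 .
\end{align*}
```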
https://en.wikipedia.org/wiki/Milne_model
Milnesium alpigenum is a species of tardigrade (phylum Tardigrada). Like its taxonomic relatives, it is an omnivorous predator that feeds on other small organisms, such as algae, rotifers, and nematodes. [ 1 ] M. alpigenum was discovered by Christian Gottfried Ehrenberg in 1853. [ 2 ] It is very closely related to Milnesium tardigradum, along with many other species in the genus Milnesium. M. alpigenum was first suggested to be an independent species by Christian Gottfried Ehrenberg in 1853. [ 2 ] However, it was immediately rejected as a valid species because of its extreme morphological similarity to Milnesium tardigradum (Doyère, 1840). This was also because intraspecific phenotypic variation was thought to be very large within tardigrade species, and each species was assumed to be vastly physiologically different from the others. In the early 20th century, however, more research was conducted on the morphological differences within Milnesium and other tardigrade genera. It was discovered that any differences between species were very subtle and that all tardigrades were particularly sensitive to reproductive isolating mechanisms. [ 2 ] For three-quarters of a century (1853–1928), M. alpigenum remained invalid. [ 2 ] In 1928, the zoologist Ernst Marcus synonymized M. alpigenum, along with Milnesium quadrifidum, with M. tardigradum. Subsequent analyses identified small morphological differences in claw configuration between the three species, along with differences in statistical morphometry and DNA sequences. [ 2 ] These discoveries cemented M. alpigenum as a valid species, and its taxonomic status was confirmed. Because speciation within most of the genus Milnesium is sympatric, the pre-zygotic isolating factors between M. alpigenum and M. tardigradum are currently unknown; it is therefore predicted that these species do interbreed but are unable to produce viable offspring because of post-zygotic factors. [ 1 ] Phylogenetically, M. alpigenum branches away from the group of subspecies of M. inceptum (a close relative) and then from its closest relative, Milnesium tardigradum. Far earlier, it branches off from other tardigrade lineages such as Diploechiniscus or Echiniscus. M. alpigenum has a symmetrical, roughly rounded body with eight legs. Its method of locomotion is to use its six front legs to propel itself through water and occasionally to use its claws to grip onto substrates; its hind legs often act as a means to push itself off substrates, though the animal will often simply drift. Individuals vary considerably in size, but some have been measured at up to 0.7 mm in length. [ 1 ] Tardigrades possess extreme resilience to all sorts of adverse environmental factors, such as extreme radiation levels, extreme temperatures (both high and low), extreme pressures (both high and low), extreme levels of toxins, and extended periods without food or water (up to 10 years). [ 3 ] They counteract these extreme environmental stresses by entering a dormant state called cryptobiosis, in which their metabolism decreases to approximately 0.01% of its regular level. [ 4 ] There is a very limited number of possible distinguishing morphological traits within the genus Milnesium, with all other traits being common to the greater phylum (see tardigrade morphology). The Milnesium-specific traits that M. alpigenum possesses are as follows: a (3-3)(3-3) claw configuration, the absence of any claw-configuration change as the individual reaches adulthood, a (4+2) arrangement of peribuccal lamellae, the absence of dorsal cuticle sculpturing, the absence of pseudoplates, a parthenogenetic reproductive mode, and (although not directly morphological) a Palaearctic zoogeographic origin. [ 5 ] All of these traits, including the zoogeographic origin, are shared with Milnesium tardigradum, except that M. tardigradum has a (2-3)(3-2) claw configuration and its claw configuration changes as it moves to adulthood. [ 5 ] Milnesium alpigenum is found in the Palaearctic realm (upper Eurasia). It occupies the same ecological range as Milnesium tardigradum and most other Tardigrada species, namely aquatic environments such as marine, coastal, and terrestrial areas. [ 4 ] In fact, tardigrades are so resilient, populous, and varied that a bottle of spring water may contain tens of different species of tardigrade, possibly including M. alpigenum. [ 6 ] Like almost all tardigrades, Milnesium alpigenum reproduces both sexually and asexually via parthenogenesis, for reasons similar to those of other asexually reproducing organisms such as aphids or sea stars. It reproduces asexually to take advantage of resource-rich environments and of limited courting opportunities, while (like many organisms that reproduce asexually) it also reproduces sexually: this allows M. alpigenum to colonize unreliable or unfamiliar environments by increasing genetic diversity, giving higher chances of advantageous traits and thus inter-generational survival. [ 7 ] During sexual reproduction, the female lays at most 12 eggs as it sheds its skin, and the eggs are left in the cuticle, where they are fertilized externally. They take around five to sixteen days to hatch. The hatched larvae undergo various moulting stages that allow them to incrementally reach adulthood; the length of these stages depends on the individual's nutrition. Finally, once the larvae finish these stages, they undergo a final growth moult called ecdysis, after which the individual has reached reproductive maturity. All tardigrades, including M. alpigenum, follow an "r"-type reproductive strategy of having many offspring with little to no investment in individual growth. The reproductive cycle and behavior of M. alpigenum are almost identical to those of Milnesium tardigradum. [ 1 ] Where M. alpigenum stands taxonomically was a complex problem that took decades to resolve; where tardigrades in general stand on the wider tree of life remains, in itself, a mystery. Because of the limited fossil evidence tied to historic specimens of tardigrades, it is difficult to determine exactly where tardigrades branch off evolutionarily. Tardigrades have been phylogenetically linked to arthropods and likely have a similar evolutionary history, although the extent of the relationship is still debated. [ 4 ] Other research has shown a shortage of a subset of genes also found in nematodes, another member of the superphylum Ecdysozoa. [ 8 ]
https://en.wikipedia.org/wiki/Milnesium_alpigenum
Milnesium tardigradum is a cosmopolitan species of tardigrade that can be found in a diverse range of environments. [ 1 ] It has also been found in the sea around Antarctica . [ 2 ] M. tardigradum was described by Louis Michel François Doyère in 1840. [ 3 ] [ 4 ] It contains unidentified osmolytes that could potentially provide important information about the process of cryptobiosis . [ 5 ] M. tardigradum has a symmetrical body with eight legs; it uses claws, a distinctive feature of this tardigrade species. The total length of the body varies, with some individuals measuring up to 0.7 mm in length. [ 6 ] M. tardigradum has been found to possess a high level of radioresistance . [ 7 ] In 2007, individuals of two tardigrade species, Richtersius coronifer and M. tardigradum , were subjected to the radiation , near- vacuum , and near- absolute zero conditions of outer space as part of the European Space Agency 's Biopan-6 experiment. Three specimens of M. tardigradum survived. [ 8 ] M. tardigradum can cope with high levels of environmental stress by initiating cryptobiosis. During this state, the internal biological clock of M. tardigradum halts; thus, time spent in the cryptobiotic state does not contribute to the aging process . [ 9 ] M. tardigradum is an omnivorous predator. It typically feeds on other small organisms, such as algae , rotifers , and nematodes . There have also been recorded cases of M. tardigradum feeding on other, smaller tardigrades. [ 6 ] M. tardigradum has been phylogenetically linked to arthropods . Although the extent of the relationship is still debated, evidence suggests that tardigrades and arthropods have a close evolutionary history. [ 9 ] Recent research has shown a shortage in a particular subset of genes also found in nematodes, another member of the Ecdysozoa superphylum. [ 10 ] The biogeographical distribution of M. tardigradum is large. The species occupies mostly aquatic environments in marine, coastal, and terrestrial areas. The full distribution of M. tardigradum is difficult to analyze due to the difficulty of its taxonomy and the lack of sufficient data. [ 9 ] M. tardigradum reproduces both sexually and through parthenogenesis. The mating behavior of tardigrades is difficult to reproduce under artificial conditions; hence the frequency and timing of reproduction are not fully understood. Whether and when a mating season exists for M. tardigradum is unknown. [ 6 ] Females lay up to 12 eggs, which hatch after several days (around five to sixteen). The development of newly hatched larvae is marked by various molting stages, rather than metamorphosis . The time frame of these molting stages varies from tardigrade to tardigrade, as it depends on the nutrition of the specific individual. [ 6 ] Once the molting stages are complete, the larval tardigrade attempts to find an ideal location to initiate ecdysis . Some eggs may be left in the discarded exuvia . [ 11 ] Tardigrades have been shown to respond to temperature changes differently at different developmental stages. Specifically, the younger the egg, the less likely it is to survive extreme environments. However, soon after development, tardigrades demonstrate a remarkable ability to withstand these conditions. To survive such conditions, tardigrades need time to develop important cellular structures and repair mechanisms. [ 12 ] M. tardigradum was voted the winner of The Guardian 's "2025 invertebrate of the year" competition, from a shortlist of ten.
The article describing the conclusion of the contest stated that the species had "endured all five previous planetary extinction events ". [ 13 ]
https://en.wikipedia.org/wiki/Milnesium_tardigradum
In mathematics , specifically differential and algebraic topology , during the mid-1950s John Milnor [ 1 ] pg 14 was trying to understand the structure of ( n − 1 ) {\displaystyle (n-1)} -connected manifolds of dimension 2 n {\displaystyle 2n} (since n {\displaystyle n} -connected 2 n {\displaystyle 2n} -manifolds are homeomorphic to spheres, this is the first non-trivial case) and found an example of a space which is homotopy equivalent to a sphere but not evidently diffeomorphic to it. He did this by looking at real vector bundles V → S n {\displaystyle V\to S^{n}} over a sphere and studying the properties of the associated disk bundle. It turns out that the boundary of this bundle is homotopy equivalent to a sphere S 2 n − 1 {\displaystyle S^{2n-1}} , but in certain cases it is not diffeomorphic to one. The failure of diffeomorphism is detected by studying a hypothetical cobordism between this boundary and a sphere, and showing that the existence of such a cobordism would contradict the Hirzebruch signature theorem .
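To make the contradiction concrete, here is a sketch of the arithmetic in the 7-dimensional case ( n = 4 {\displaystyle n=4} ), stated under the explicit assumption that the boundary M 7 {\displaystyle M^{7}} bounds a smooth compact 8-manifold; this summary follows the standard account of Milnor's argument rather than the text above. In dimension 8 the Hirzebruch signature theorem reads σ ( W ) = ⟨ 1 45 ( 7 p 2 − p 1 2 ) , [ W ] ⟩ {\displaystyle \sigma (W)=\left\langle {\tfrac {1}{45}}(7p_{2}-p_{1}^{2}),[W]\right\rangle } for a closed smooth 8-manifold W {\displaystyle W} . If M {\displaystyle M} were diffeomorphic to S 7 {\displaystyle S^{7}} , gluing a disk D 8 {\displaystyle D^{8}} onto the disk bundle would produce such a closed W {\displaystyle W} , and reducing 45 σ = 7 p 2 − p 1 2 {\displaystyle 45\sigma =7p_{2}-p_{1}^{2}} modulo 7 gives p 1 2 ≡ 4 σ ( mod 7 ) {\displaystyle p_{1}^{2}\equiv 4\sigma {\pmod {7}}} , since 45 ≡ 3 {\displaystyle 45\equiv 3} and − 3 ≡ 4 ( mod 7 ) {\displaystyle -3\equiv 4{\pmod {7}}} . Because the signature and the first Pontryagin class of W {\displaystyle W} are already determined by the bundle data, the residue of p 1 2 − 4 σ {\displaystyle p_{1}^{2}-4\sigma } modulo 7 can be computed directly, and for suitable bundles it is non-zero, so no such smooth filling, and hence no diffeomorphism with the standard sphere, can exist.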
https://en.wikipedia.org/wiki/Milnor's_sphere
In mathematics , the Milnor conjecture was a proposal by John Milnor ( 1970 ) of a description of the Milnor K-theory (mod 2) of a general field F with characteristic different from 2, by means of the Galois (or equivalently étale ) cohomology of F with coefficients in Z /2 Z . It was proved by Vladimir Voevodsky ( 1996 , 2003a , 2003b ). Let F be a field of characteristic different from 2. Then there is an isomorphism K n M ( F ) / 2 ≅ H e ´ t n ( F , Z / 2 Z ) {\displaystyle K_{n}^{M}(F)/2\cong H_{\text{ét}}^{n}(F,\mathbb {Z} /2\mathbb {Z} )} for all n ≥ 0, where K M denotes the Milnor ring . The proof of this theorem by Vladimir Voevodsky uses several ideas developed by Voevodsky, Alexander Merkurjev , Andrei Suslin , Markus Rost , Fabien Morel , Eric Friedlander , and others, including the newly minted theory of motivic cohomology (a kind of substitute for singular cohomology for algebraic varieties ) and the motivic Steenrod algebra . The analogue of this result for primes other than 2 was known as the Bloch–Kato conjecture . Work of Voevodsky and Markus Rost yielded a complete proof of this conjecture in 2009; the result is now called the norm residue isomorphism theorem .
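For orientation, a brief sketch of the two sides of the isomorphism, using standard definitions that are not spelled out in the text above: the Milnor ring is the quotient of the tensor algebra of the multiplicative group F × {\displaystyle F^{\times }} by the ideal generated by the Steinberg relations, K ∗ M ( F ) = T ( F × ) / ⟨ a ⊗ ( 1 − a ) : a ≠ 0 , 1 ⟩ {\displaystyle K_{*}^{M}(F)=T(F^{\times })/\langle a\otimes (1-a):a\neq 0,1\rangle } , with the class of a 1 ⊗ ⋯ ⊗ a n {\displaystyle a_{1}\otimes \cdots \otimes a_{n}} written as the symbol { a 1 , … , a n } {\displaystyle \{a_{1},\ldots ,a_{n}\}} . The isomorphism of the conjecture is induced by the Galois symbol (norm residue) map, which sends { a 1 , … , a n } {\displaystyle \{a_{1},\ldots ,a_{n}\}} to the cup product ( a 1 ) ∪ ⋯ ∪ ( a n ) {\displaystyle (a_{1})\cup \cdots \cup (a_{n})} , where ( a ) ∈ H 1 ( F , Z / 2 Z ) ≅ F × / ( F × ) 2 {\displaystyle (a)\in H^{1}(F,\mathbb {Z} /2\mathbb {Z} )\cong F^{\times }/(F^{\times })^{2}} is the class of a {\displaystyle a} given by Kummer theory .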
https://en.wikipedia.org/wiki/Milnor_conjecture_(K-theory)
In knot theory , the Milnor conjecture says that the slice genus of the ( p , q ) {\displaystyle (p,q)} torus knot is ( p − 1 ) ( q − 1 ) / 2 {\displaystyle (p-1)(q-1)/2} . For example, the trefoil knot is the ( 2 , 3 ) {\displaystyle (2,3)} torus knot, so the conjecture gives it slice genus 1. It is in a similar vein to the Thom conjecture . It was first proved by gauge-theoretic methods by Peter Kronheimer and Tomasz Mrowka . [ 1 ] Jacob Rasmussen later gave a purely combinatorial proof using Khovanov homology , by means of the s-invariant . [ 2 ]
https://en.wikipedia.org/wiki/Milnor_conjecture_(knot_theory)
Milos Vratislav Novotny (born 19 April 1942) [ 1 ] is an American chemist , currently the Distinguished Professor Emeritus and Director of the Novotny Glycoscience Laboratory and the Institute for Pheromone Research at Indiana University , [ 2 ] and also a published author. [ 3 ] [ 4 ] Milos Novotny received his Bachelor of Science from the University of Brno, Czechoslovakia in 1962. In 1965, Novotny received his Ph.D. at the University of Brno. Novotny also holds honorary doctorates from Uppsala University , Masaryk University and Charles University , and he has been a major figure in analytical separation methods. [ 2 ] Novotny was recognized for the development of PAGE Polyacrylamide Gel-filled Capillaries for Capillary Electrophoresis in 1993. [ 5 ] In his years of work dedicated to analytical chemistry he has earned a reputation for being especially innovative in the field and has contributed a great deal to several analytical separation methods. Most notably, Novotny has worked extensively with the microcolumn separation techniques of liquid chromatography, supercritical fluid chromatography, and capillary electrophoresis. Additionally, he is highly acclaimed for his research in proteomics and glycoanalysis and for identifying the first mammalian pheromones. [ 6 ] In 1986, Novotny was given the Award in Chromatography from the American Chemical Society. [ 7 ] Novotny received the ANACHEM award in 1992. [ 8 ] This award is given to outstanding analytical chemists for teaching, research, administration or other activities which have advanced the field. Novotny was also selected as the LCGC Lifetime Achievement award recipient in 2019. [ 9 ] Chairman, Gordon Research Conference on Analytical Chemistry; James B. Himes Merit Award of the Chicago Chromatography Discussion Group; M.S. Tswett Award and Medal in Chromatography; American Chemical Society Award in Chromatography; ISCO Award in Biochemical Instrumentation; Eastern Analytical Symposium Award in Chromatography; Chemical Instrumentation Award of the American Chemical Society; Distinguished Faculty Research Lecture, Indiana University. [ 10 ] Keene P. Dimick Award in Chromatography; Third International Symposium on Supercritical Fluid Chromatography Award for Pioneering Work in the Development of SFC; Marcel J.E. Golay Award and Medal, International Symposium on Capillary Chromatography; American Chemical Society Award in Separation Science and Technology; American Chemical Society Exceptional Achievement Award as a Capillary Gas Chromatography Short Course Instructor; R&D 100 Award for the technologically significant new product “PAGE Polyacrylamide Gel-filled Capillaries for Capillary Electrophoresis”; Jan E. Purkynje Memorial Medal of the Czech Academy of Sciences; R&D Magazine Scientist of the Year Award; M.S. Tswett Memorial Medal of the Russian Academy of Sciences; A.J.P. Martin Gold Medal of the Chromatographic Society of Great Britain; Theophilus Redwood Award, The Royal Society of Chemistry, Great Britain; Distinguished Teaching and Mentoring Award of the University Graduate School, Indiana University; Elected as a Foreign Member of the Royal Society of Sciences (Sweden); College of Arts & Sciences Distinguished Faculty Award, Indiana University. [ 10 ] COLACRO (Congreso Latinoamericano de Cromatografia) Merit Medal; Pittsburgh Analytical Chemistry Award; Eastern Analytical Symposium Award for Outstanding Achievements in the Fields of Analytical Chemistry; Tracy M.
Sonneborn Award for Outstanding Research and Teaching, Indiana University; Dal Nogare Award in Chromatography; CaSSS (California Separation Science Society) Award for Excellence in Separation Science; Honorary Member of the Slovak Pharmaceutical Society; Foreign Member of the Learned Society of the Czech Republic (Czech Academy of Sciences); American Chemical Society Award in Analytical Chemistry; Jan Weber Prize and Medal, Slovak Pharmaceutical Society, Slovakia; Ralph N. Adams Award in Bioanalytical Chemistry. [ 10 ] Honorary Membership of the Czech Society for Mass Spectrometry; Lifetime Achievement Award in Chromatography by the LC-GC Magazine, Europe; Giorgio Nota Award, Italian Chemical Society; Heyrovsky Medal in Chemical Sciences, Prague, Czech Republic. [ 10 ] On the faculty of Indiana University, Bloomington, since 1971. 1978 – Professor of Chemistry. 1980 – Visiting Scientist, Department of Immunogenetics, Max Planck Institute for Biology, Tübingen, Germany. 1988 – James H. Rudy Professor of Chemistry. 1999 – Distinguished Professor of Chemistry. 1999 – Director of the Institute for Pheromone Research. 2000–2015 – Lilly Chemistry Alumni Chair. 2004 – Adjunct Professor of Medicine, Indiana University School of Medicine. 2004–2009 – Director of the National Center for Glycomics and Glycoproteomics. 2010 – Director of the Novotny Glycoscience Laboratory. 2011 – Distinguished Professor Emeritus of Chemistry. [ 10 ]
https://en.wikipedia.org/wiki/Milos_Novotny
The Milthorpe Lecture is a series of public lectures on environmental science held at Macquarie University , Australia . It is endowed by the Milthorpe Fund in memory of F.L. Milthorpe , Chair of Biology at the University from 1967 to 1982. The first lecture was delivered by David Suzuki in 1989. [ 1 ]
https://en.wikipedia.org/wiki/Milthorpe_Lecture
Milü ( Chinese : 密率 ; pinyin : mìlǜ ; lit. 'close ratio'), also known as Zulü (Zu's ratio), is the name given to an approximation of π ( pi ) found by the Chinese mathematician and astronomer Zu Chongzhi during the 5th century. Using Liu Hui's algorithm , which is based on the areas of regular polygons approximating a circle, Zu computed π as being between 3.1415926 and 3.1415927 [ a ] and gave two rational approximations of π , ⁠ 22 / 7 ⁠ and ⁠ 355 / 113 ⁠ , which were named yuelü ( 约率 ; yuēlǜ ; 'approximate ratio') and milü respectively. [ 1 ] ⁠ 355 / 113 ⁠ is the best rational approximation of π with a denominator of four digits or fewer, being accurate to six decimal places. It is within 0.000 009 % of the value of π , or in terms of common fractions overestimates π by less than ⁠ 1 / 3 748 629 ⁠ . The next rational number (ordered by size of denominator) that is a better rational approximation of π is ⁠ 52 163 / 16 604 ⁠ , though it is still only correct to six decimal places. To be accurate to seven decimal places, one needs to go as far as ⁠ 86 953 / 27 678 ⁠ . For eight, ⁠ 102 928 / 32 763 ⁠ is needed. [ 2 ] The accuracy of milü to the true value of π can be explained using the continued fraction expansion of π , the first few terms of which are [3; 7, 15, 1, 292, 1, 1, ...] . A property of continued fractions is that truncating the expansion of a given number at any point will give the best rational approximation of the number. To obtain milü , truncate the continued fraction expansion of π immediately before the term 292; that is, π is approximated by the finite continued fraction [3; 7, 15, 1] , which is equivalent to milü . Since 292 is an unusually large term in a continued fraction expansion (corresponding to the next truncation introducing only a very small term, ⁠ 1 / 292 ⁠ , to the overall fraction), this convergent will be especially close to the true value of π : [ 3 ] Zu's contemporary calendarist and mathematician He Chengtian invented a fraction interpolation method called 'harmonization of the divisor of the day' ( 调日法 ; diaorifa ) to increase the accuracy of approximations of π by iteratively adding the numerators and denominators of fractions. Zu's approximation of π ≈ ⁠ 355 / 113 ⁠ can be obtained with He Chengtian's method. [ 1 ]
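The truncation rule above is easy to check computationally. The following is a minimal Python sketch (an illustration, not part of the sourced material; the helper name convergent is ours) that evaluates finite continued fractions with exact rational arithmetic:

from fractions import Fraction

def convergent(cf):
    # Evaluate a finite continued fraction [a0; a1, ..., ak]
    # from the innermost term outward, using exact rationals.
    value = Fraction(cf[-1])
    for a in reversed(cf[:-1]):
        value = a + 1 / value
    return value

print(convergent([3, 7]))         # 22/7, the yuelü
print(convergent([3, 7, 15, 1]))  # 355/113, the milü

Truncating one term later, [3; 7, 15, 1, 292] evaluates to 103993/33102; the unusually large term 292 is what makes 355/113 so accurate for the size of its denominator.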
https://en.wikipedia.org/wiki/Milü
Mimikatz is both an exploit on Microsoft Windows that extracts passwords stored in memory and software that performs that exploit. [ 1 ] It was created by French programmer Benjamin Delpy, and its name is French slang for "cute cats". [ 1 ] Benjamin Delpy discovered a flaw in Microsoft Windows that holds both an encrypted copy of a password and a key that can be used to decipher it in memory at the same time. [ 1 ] He contacted Microsoft in 2011 to point out the flaw, but Microsoft replied that exploiting it would require the machine to be already compromised. [ 1 ] Delpy realised that the flaw could be used to gain access to non-compromised machines on a network from a compromised machine. [ 1 ] He released the first version of the software in May 2011 as closed source software. [ 1 ] In September 2011, the exploit was used in the DigiNotar hack. [ 1 ] Delpy spoke about the software at a conference in Russia in 2012. [ 1 ] At one point during the conference, he returned to his room to find a stranger sitting at his laptop. [ 1 ] The stranger apologised, saying he was in the wrong room, and left. [ 1 ] A second man approached him during the conference and demanded he hand over copies of his presentation and software on a USB flash drive . [ 1 ] Delpy gave him copies. [ 1 ] Delpy felt shaken by his experiences, and before he left Russia, he released the source code on GitHub . [ 1 ] He felt that those defending against cyberattacks should learn from the code in order to defend against the attack. [ 1 ] In 2013, Microsoft added a setting to Windows 8.1 that allows the exploitable credential-caching feature to be turned off. [ 1 ] In Windows 10 the feature is turned off by default, but Jake Williams of Rendition Infosec says that the attack remains effective, either because the target runs an outdated version of Windows or because an attacker can use privilege escalation to gain enough control over the target to turn the exploitable feature back on. [ 1 ] Delpy has since updated the software to cover further exploits beyond the original. [ 2 ] The Carbanak attack and the cyberattack on the Bundestag used the exploit. [ 1 ] The NotPetya and BadRabbit malware used versions of the attack combined with the EternalBlue and EternalRomance exploits. [ 1 ] In Mr. Robot episode 9 of season 2, Angela Moss uses mimikatz to get her boss's Windows domain password. [ 3 ]
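The Windows 8.1/10 setting mentioned above is widely documented as the UseLogonCredential value under the WDigest registry key. Below is a minimal defensive Python sketch (our illustration, not part of the mimikatz tooling; the registry path is an assumption to verify against current Microsoft guidance) that checks whether a machine has the exploitable credential caching re-enabled:

import winreg  # Windows-only standard library module

# Registry location commonly documented for the WDigest
# credential-caching switch (assumed here, not taken from the article).
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "UseLogonCredential")
        if value == 1:
            print("WDigest caching is ON: plaintext credentials may sit in memory.")
        else:
            print("WDigest caching is off.")
except FileNotFoundError:
    # Key or value absent: modern Windows defaults to caching disabled.
    print("UseLogonCredential not set; the default (off) applies on Windows 8.1+.")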
https://en.wikipedia.org/wiki/Mimikatz
MimoDB is a database of peptides that have been selected from random peptide libraries based on their ability to bind small compounds, nucleic acids , proteins , cells , and tissues through phage display . [ 1 ]
https://en.wikipedia.org/wiki/MimoDB
In linear algebra and functional analysis , the min-max theorem , or variational theorem , or Courant–Fischer–Weyl min-max principle , is a result that gives a variational characterization of eigenvalues of compact Hermitian operators on Hilbert spaces . It can be viewed as the starting point of many results of a similar nature. This article first discusses the finite-dimensional case and its applications before considering compact operators on infinite-dimensional Hilbert spaces. We will see that for compact operators, the proof of the main theorem uses essentially the same idea as the finite-dimensional argument. In the case that the operator is non-Hermitian, the theorem provides an equivalent characterization of the associated singular values . The min-max theorem can be extended to self-adjoint operators that are bounded below. Let A be an n × n Hermitian matrix . As with many other variational results on eigenvalues, one considers the Rayleigh–Ritz quotient R A : C n \ {0} → R defined by R A ( x ) = ( A x , x ) ( x , x ) {\displaystyle R_{A}(x)={\frac {(Ax,x)}{(x,x)}}} , where (⋅, ⋅) denotes the Euclidean inner product on C n . Equivalently, the Rayleigh–Ritz quotient can be replaced by f ( x ) = ( A x , x ) , ‖ x ‖ = 1 {\displaystyle f(x)=(Ax,x),\quad \|x\|=1} . The Rayleigh quotient of an eigenvector v {\displaystyle v} is its associated eigenvalue λ {\displaystyle \lambda } because R A ( v ) = ( λ v , v ) / ( v , v ) = λ {\displaystyle R_{A}(v)=(\lambda v,v)/(v,v)=\lambda } . For a Hermitian matrix A , the range of the continuous functions R A ( x ) and f ( x ) is a compact interval [ a , b ] of the real line. The maximum b and the minimum a are the largest and smallest eigenvalue of A , respectively. The min-max theorem is a refinement of this fact. Let A {\textstyle A} be Hermitian on an inner product space V {\textstyle V} with dimension n {\textstyle n} , with spectrum ordered in descending order λ 1 ≥ . . . ≥ λ n {\textstyle \lambda _{1}\geq ...\geq \lambda _{n}} . Let v 1 , . . . , v n {\textstyle v_{1},...,v_{n}} be the corresponding unit-length orthogonal eigenvectors. Reverse the spectrum ordering, so that ξ 1 = λ n , . . . , ξ n = λ 1 {\textstyle \xi _{1}=\lambda _{n},...,\xi _{n}=\lambda _{1}} . (Poincaré’s inequality) — Let M {\textstyle M} be a subspace of V {\textstyle V} with dimension k {\textstyle k} . Then there exist unit vectors x , y ∈ M {\textstyle x,y\in M} such that ⟨ x , A x ⟩ ≤ λ k {\textstyle \langle x,Ax\rangle \leq \lambda _{k}} and ⟨ y , A y ⟩ ≥ ξ k {\textstyle \langle y,Ay\rangle \geq \xi _{k}} . Part 2 is a corollary, using − A {\textstyle -A} . For part 1: M {\textstyle M} is a k {\textstyle k} -dimensional subspace, and the span N := s p a n ( v k , . . . v n ) {\textstyle N:=span(v_{k},...v_{n})} has dimension n − k + 1 {\textstyle n-k+1} , so by counting dimensions N {\textstyle N} must intersect M {\textstyle M} in at least a line. Take a unit vector x ∈ M ∩ N {\textstyle x\in M\cap N} ; since x {\textstyle x} lies in the span of v k , . . . , v n {\textstyle v_{k},...,v_{n}} , it satisfies ⟨ x , A x ⟩ ≤ λ k {\textstyle \langle x,Ax\rangle \leq \lambda _{k}} , as required. min-max theorem — λ k = max M ⊂ V dim ⁡ ( M ) = k min x ∈ M ‖ x ‖ = 1 ⟨ x , A x ⟩ = min M ⊂ V dim ⁡ ( M ) = n − k + 1 max x ∈ M ‖ x ‖ = 1 ⟨ x , A x ⟩ . {\displaystyle {\begin{aligned}\lambda _{k}&=\max _{\begin{array}{c}{\mathcal {M}}\subset V\\\operatorname {dim} ({\mathcal {M}})=k\end{array}}\min _{\begin{array}{c}x\in {\mathcal {M}}\\\|x\|=1\end{array}}\langle x,Ax\rangle \\&=\min _{\begin{array}{c}{\mathcal {M}}\subset V\\\operatorname {dim} ({\mathcal {M}})=n-k+1\end{array}}\max _{\begin{array}{c}x\in {\mathcal {M}}\\\|x\|=1\end{array}}\langle x,Ax\rangle {\text{. }}\end{aligned}}} Part 2 is a corollary of part 1, by using − A {\textstyle -A} .
By Poincaré’s inequality, λ k {\textstyle \lambda _{k}} is an upper bound for the right side. By setting M = s p a n ( v 1 , . . . v k ) {\textstyle {\mathcal {M}}=span(v_{1},...v_{k})} , the upper bound is achieved. Define the partial trace t r V ( A ) {\textstyle tr_{V}(A)} to be the trace of the projection of A {\textstyle A} to V {\textstyle V} . It is equal to ∑ i v i ∗ A v i {\textstyle \sum _{i}v_{i}^{*}Av_{i}} given an orthonormal basis of V {\textstyle V} . Wielandt minimax formula ( [ 1 ] : 44 ) — Let 1 ≤ i 1 < ⋯ < i k ≤ n {\textstyle 1\leq i_{1}<\cdots <i_{k}\leq n} be integers. Define a partial flag to be a nested collection V 1 ⊂ ⋯ ⊂ V k {\textstyle V_{1}\subset \cdots \subset V_{k}} of subspaces of C n {\textstyle \mathbb {C} ^{n}} such that dim ⁡ ( V j ) = i j {\textstyle \operatorname {dim} \left(V_{j}\right)=i_{j}} for all 1 ≤ j ≤ k {\textstyle 1\leq j\leq k} . Define the associated Schubert variety X ( V 1 , … , V k ) {\textstyle X\left(V_{1},\ldots ,V_{k}\right)} to be the collection of all k {\textstyle k} -dimensional subspaces W {\textstyle W} such that dim ⁡ ( W ∩ V j ) ≥ j {\textstyle \operatorname {dim} \left(W\cap V_{j}\right)\geq j} . λ i 1 ( A ) + ⋯ + λ i k ( A ) = sup V 1 , … , V k inf W ∈ X ( V 1 , … , V k ) t r W ( A ) {\displaystyle \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)=\sup _{V_{1},\ldots ,V_{k}}\inf _{W\in X\left(V_{1},\ldots ,V_{k}\right)}tr_{W}(A)} The ≤ {\textstyle \leq } case. Let V j = s p a n ( e 1 , … , e i j ) {\textstyle V_{j}=span(e_{1},\dots ,e_{i_{j}})} and take any W ∈ X ( V 1 , … , V k ) {\textstyle W\in X\left(V_{1},\ldots ,V_{k}\right)} ; it remains to show that λ i 1 ( A ) + ⋯ + λ i k ( A ) ≤ t r W ( A ) {\displaystyle \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)\leq tr_{W}(A)} To show this, we construct an orthonormal set of vectors v 1 , … , v k {\textstyle v_{1},\dots ,v_{k}} such that v j ∈ V j ∩ W {\textstyle v_{j}\in V_{j}\cap W} . Then t r W ( A ) = ∑ j ⟨ v j , A v j ⟩ ≥ ∑ j λ i j ( A ) {\textstyle tr_{W}(A)=\sum _{j}\langle v_{j},Av_{j}\rangle \geq \sum _{j}\lambda _{i_{j}}(A)} . Since d i m ( V 1 ∩ W ) ≥ 1 {\textstyle dim(V_{1}\cap W)\geq 1} , we pick any unit v 1 ∈ V 1 ∩ W {\textstyle v_{1}\in V_{1}\cap W} . Next, since d i m ( V 2 ∩ W ) ≥ 2 {\textstyle dim(V_{2}\cap W)\geq 2} , we pick any unit v 2 ∈ ( V 2 ∩ W ) {\textstyle v_{2}\in (V_{2}\cap W)} that is perpendicular to v 1 {\textstyle v_{1}} , and so on. The ≥ {\textstyle \geq } case. For any such sequence of subspaces V i {\textstyle V_{i}} , we must find some W ∈ X ( V 1 , … , V k ) {\textstyle W\in X\left(V_{1},\ldots ,V_{k}\right)} such that λ i 1 ( A ) + ⋯ + λ i k ( A ) ≥ t r W ( A ) {\displaystyle \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)\geq tr_{W}(A)} We prove this by induction on n {\textstyle n} . The n = 1 {\textstyle n=1} case is the Courant–Fischer theorem. Assume now n ≥ 2 {\textstyle n\geq 2} . If i 1 ≥ 2 {\textstyle i_{1}\geq 2} , then we can apply induction. Let E = s p a n ( e i 1 , … , e n ) {\textstyle E=span(e_{i_{1}},\dots ,e_{n})} . We construct a partial flag within E {\textstyle E} from the intersection of E {\textstyle E} with V 1 , … , V k {\textstyle V_{1},\dots ,V_{k}} . We begin by picking a ( i k − ( i 1 − 1 ) ) {\textstyle (i_{k}-(i_{1}-1))} -dimensional subspace W k ′ ⊂ E ∩ V i k {\textstyle W_{k}'\subset E\cap V_{i_{k}}} , which exists by counting dimensions. This has codimension ( i 1 − 1 ) {\textstyle (i_{1}-1)} within V i k {\textstyle V_{i_{k}}} .
We then go down one space and pick a ( i k − 1 − ( i 1 − 1 ) ) {\textstyle (i_{k-1}-(i_{1}-1))} -dimensional subspace W k − 1 ′ ⊂ W k ′ ∩ V i k − 1 {\textstyle W_{k-1}'\subset W_{k}'\cap V_{i_{k-1}}} , which again exists by counting dimensions, and so on. Now, since d i m ( E ) ≤ n − 1 {\textstyle dim(E)\leq n-1} , applying the induction hypothesis gives some W ∈ X ( W 1 ′ , … , W k ′ ) {\textstyle W\in X(W_{1}',\dots ,W_{k}')} such that λ i 1 − ( i 1 − 1 ) ( A | E ) + ⋯ + λ i k − ( i 1 − 1 ) ( A | E ) ≥ t r W ( A ) {\displaystyle \lambda _{i_{1}-(i_{1}-1)}(A|E)+\cdots +\lambda _{i_{k}-(i_{1}-1)}(A|E)\geq tr_{W}(A)} Now λ i j − ( i 1 − 1 ) ( A | E ) {\textstyle \lambda _{i_{j}-(i_{1}-1)}(A|E)} is the ( i j − ( i 1 − 1 ) ) {\textstyle (i_{j}-(i_{1}-1))} -th eigenvalue of A {\textstyle A} orthogonally projected down to E {\textstyle E} . By the Cauchy interlacing theorem, λ i j − ( i 1 − 1 ) ( A | E ) ≤ λ i j ( A ) {\textstyle \lambda _{i_{j}-(i_{1}-1)}(A|E)\leq \lambda _{i_{j}}(A)} . Since X ( W 1 ′ , … , W k ′ ) ⊂ X ( V 1 , … , V k ) {\textstyle X(W_{1}',\dots ,W_{k}')\subset X(V_{1},\dots ,V_{k})} , we are done. If i 1 = 1 {\textstyle i_{1}=1} , then we perform a similar construction. Let E = s p a n ( e 2 , … , e n ) {\textstyle E=span(e_{2},\dots ,e_{n})} . If V k ⊂ E {\textstyle V_{k}\subset E} , then we can induct. Otherwise, we construct a partial flag sequence W 2 , … , W k {\textstyle W_{2},\dots ,W_{k}} . By induction, there exists some W ′ ∈ X ( W 2 , … , W k ) ⊂ X ( V 2 , … , V k ) {\textstyle W'\in X(W_{2},\dots ,W_{k})\subset X(V_{2},\dots ,V_{k})} such that λ i 2 − 1 ( A | E ) + ⋯ + λ i k − 1 ( A | E ) ≥ t r W ′ ( A ) {\displaystyle \lambda _{i_{2}-1}(A|E)+\cdots +\lambda _{i_{k}-1}(A|E)\geq tr_{W'}(A)} thus λ i 2 ( A ) + ⋯ + λ i k ( A ) ≥ t r W ′ ( A ) {\displaystyle \lambda _{i_{2}}(A)+\cdots +\lambda _{i_{k}}(A)\geq tr_{W'}(A)} It remains to find some v {\textstyle v} such that W ′ ⊕ v ∈ X ( V 1 , … , V k ) {\textstyle W'\oplus v\in X(V_{1},\dots ,V_{k})} . If V 1 ⊄ W ′ {\textstyle V_{1}\not \subset W'} , then any v ∈ V 1 ∖ W ′ {\textstyle v\in V_{1}\setminus W'} would work. Otherwise, if V 2 ⊄ W ′ {\textstyle V_{2}\not \subset W'} , then any v ∈ V 2 ∖ W ′ {\textstyle v\in V_{2}\setminus W'} would work, and so on. If none of these work, then it means V k ⊂ E {\textstyle V_{k}\subset E} , a contradiction. This has some corollaries: [ 1 ] : 44 Extremal partial trace — λ 1 ( A ) + ⋯ + λ k ( A ) = sup dim ⁡ ( V ) = k t r V ( A ) {\displaystyle \lambda _{1}(A)+\dots +\lambda _{k}(A)=\sup _{\operatorname {dim} (V)=k}tr_{V}(A)} ξ 1 ( A ) + ⋯ + ξ k ( A ) = inf dim ⁡ ( V ) = k t r V ( A ) {\displaystyle \xi _{1}(A)+\dots +\xi _{k}(A)=\inf _{\operatorname {dim} (V)=k}tr_{V}(A)} Corollary — The sum λ 1 ( A ) + ⋯ + λ k ( A ) {\textstyle \lambda _{1}(A)+\dots +\lambda _{k}(A)} is a convex function of A {\textstyle A} , and ξ 1 ( A ) + ⋯ + ξ k ( A ) {\textstyle \xi _{1}(A)+\dots +\xi _{k}(A)} is concave. (Schur-Horn inequality) ξ 1 ( A ) + ⋯ + ξ k ( A ) ≤ a i 1 , i 1 + ⋯ + a i k , i k ≤ λ 1 ( A ) + ⋯ + λ k ( A ) {\displaystyle \xi _{1}(A)+\dots +\xi _{k}(A)\leq a_{i_{1},i_{1}}+\dots +a_{i_{k},i_{k}}\leq \lambda _{1}(A)+\dots +\lambda _{k}(A)} for any subset of indices. Equivalently, this states that the diagonal vector of A {\textstyle A} is majorized by its eigenspectrum.
Schatten-norm Hölder inequality — Given Hermitian A , B {\textstyle A,B} and a Hölder pair 1 / p + 1 / q = 1 {\textstyle 1/p+1/q=1} , | tr ⁡ ( A B ) | ≤ ‖ A ‖ S p ‖ B ‖ S q {\displaystyle |\operatorname {tr} (AB)|\leq \|A\|_{S^{p}}\|B\|_{S^{q}}} WLOG, B {\textstyle B} is diagonalized; then we need to show | ∑ i B i i A i i | ≤ ‖ A ‖ S p ‖ ( B i i ) ‖ l q {\textstyle |\sum _{i}B_{ii}A_{ii}|\leq \|A\|_{S^{p}}\|(B_{ii})\|_{l^{q}}} By the standard Hölder inequality, it suffices to show ‖ ( A i i ) ‖ l p ≤ ‖ A ‖ S p {\textstyle \|(A_{ii})\|_{l^{p}}\leq \|A\|_{S^{p}}} By the Schur-Horn inequality, the diagonals of A {\textstyle A} are majorized by the eigenspectrum of A {\textstyle A} , and since the map f ( x 1 , … , x n ) = ‖ x ‖ p {\textstyle f(x_{1},\dots ,x_{n})=\|x\|_{p}} is symmetric and convex, it is Schur-convex, which gives the required bound. Let N be the nilpotent matrix N = ( 0 1 0 0 ) {\displaystyle N={\begin{pmatrix}0&1\\0&0\end{pmatrix}}} Define the Rayleigh quotient R N ( x ) {\displaystyle R_{N}(x)} exactly as above in the Hermitian case. Then it is easy to see that the only eigenvalue of N is zero, while the maximum value of the Rayleigh quotient is ⁠ 1 / 2 ⁠ . That is, the maximum value of the Rayleigh quotient is larger than the maximum eigenvalue. The singular values { σ k } of a square matrix M are the square roots of the eigenvalues of M * M (equivalently MM* ). An immediate consequence [ citation needed ] of the first equality in the min-max theorem is σ k ↓ = max dim ⁡ ( S ) = k min x ∈ S , ‖ x ‖ = 1 ‖ M x ‖ {\displaystyle \sigma _{k}^{\downarrow }=\max _{\dim(S)=k}\min _{x\in S,\|x\|=1}\|Mx\|} Similarly, σ k ↓ = min dim ⁡ ( S ) = n − k + 1 max x ∈ S , ‖ x ‖ = 1 ‖ M x ‖ {\displaystyle \sigma _{k}^{\downarrow }=\min _{\dim(S)=n-k+1}\max _{x\in S,\|x\|=1}\|Mx\|} Here σ k ↓ {\displaystyle \sigma _{k}^{\downarrow }} denotes the k th entry in the decreasing sequence of the singular values, so that σ 1 ↓ ≥ σ 2 ↓ ≥ ⋯ {\displaystyle \sigma _{1}^{\downarrow }\geq \sigma _{2}^{\downarrow }\geq \cdots } . Let A be a symmetric n × n matrix. The m × m matrix B , where m ≤ n , is called a compression of A if there exists an orthogonal projection P onto a subspace of dimension m such that PAP* = B . The Cauchy interlacing theorem states: if the eigenvalues of A are α 1 ≤ ⋯ ≤ α n {\displaystyle \alpha _{1}\leq \cdots \leq \alpha _{n}} and those of B are β 1 ≤ ⋯ ≤ β m {\displaystyle \beta _{1}\leq \cdots \leq \beta _{m}} , then for all j ≤ m {\displaystyle j\leq m} , α j ≤ β j ≤ α n − m + j {\displaystyle \alpha _{j}\leq \beta _{j}\leq \alpha _{n-m+j}} This can be proven using the min-max principle. Let β i have corresponding eigenvector b i and let S j be the j -dimensional subspace S j = span{ b 1 , ..., b j }; then, according to the first part of min-max, α j ≤ β j . On the other hand, if we define S m − j +1 = span{ b j , ..., b m }, then the second part of min-max gives β j ≤ α n − m + j {\displaystyle \beta _{j}\leq \alpha _{n-m+j}} . When n − m = 1 , we have α j ≤ β j ≤ α j +1 , hence the name interlacing theorem. Lidskii inequality — If 1 ≤ i 1 < ⋯ < i k ≤ n {\textstyle 1\leq i_{1}<\cdots <i_{k}\leq n} then λ i 1 ( A + B ) + ⋯ + λ i k ( A + B ) ≤ λ i 1 ( A ) + ⋯ + λ i k ( A ) + λ 1 ( B ) + ⋯ + λ k ( B ) {\displaystyle {\begin{aligned}&\lambda _{i_{1}}(A+B)+\cdots +\lambda _{i_{k}}(A+B)\\&\quad \leq \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)+\lambda _{1}(B)+\cdots +\lambda _{k}(B)\end{aligned}}} λ i 1 ( A + B ) + ⋯ + λ i k ( A + B ) ≥ λ i 1 ( A ) + ⋯ + λ i k ( A ) + ξ 1 ( B ) + ⋯ + ξ k ( B ) {\displaystyle {\begin{aligned}&\lambda _{i_{1}}(A+B)+\cdots +\lambda _{i_{k}}(A+B)\\&\quad \geq \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)+\xi _{1}(B)+\cdots +\xi _{k}(B)\end{aligned}}} The second inequality follows from the first by replacing A {\textstyle A} and B {\textstyle B} with their negatives; the first follows from the Wielandt minimax formula.
λ i 1 ( A + B ) + ⋯ + λ i k ( A + B ) = sup V 1 , … , V k inf W ∈ X ( V 1 , … , V k ) ( t r W ( A ) + t r W ( B ) ) = sup V 1 , … , V k ( inf W ∈ X ( V 1 , … , V k ) t r W ( A ) + t r W ( B ) ) ≤ sup V 1 , … , V k ( inf W ∈ X ( V 1 , … , V k ) t r W ( A ) + ( λ 1 ( B ) + ⋯ + λ k ( B ) ) ) = λ i 1 ( A ) + ⋯ + λ i k ( A ) + λ 1 ( B ) + ⋯ + λ k ( B ) {\displaystyle {\begin{aligned}&\lambda _{i_{1}}(A+B)+\cdots +\lambda _{i_{k}}(A+B)\\=&\sup _{V_{1},\dots ,V_{k}}\inf _{W\in X(V_{1},\dots ,V_{k})}(tr_{W}(A)+tr_{W}(B))\\=&\sup _{V_{1},\dots ,V_{k}}(\inf _{W\in X(V_{1},\dots ,V_{k})}tr_{W}(A)+tr_{W}(B))\\\leq &\sup _{V_{1},\dots ,V_{k}}(\inf _{W\in X(V_{1},\dots ,V_{k})}tr_{W}(A)+(\lambda _{1}(B)+\cdots +\lambda _{k}(B)))\\=&\lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)+\lambda _{1}(B)+\cdots +\lambda _{k}(B)\end{aligned}}} Note that ∑ i λ i ( A + B ) = t r ( A + B ) = ∑ i λ i ( A ) + λ i ( B ) {\displaystyle \sum _{i}\lambda _{i}(A+B)=tr(A+B)=\sum _{i}\lambda _{i}(A)+\lambda _{i}(B)} . In other words, λ ( A + B ) − λ ( A ) ⪯ λ ( B ) {\displaystyle \lambda (A+B)-\lambda (A)\preceq \lambda (B)} where ⪯ {\displaystyle \preceq } means majorization . By the Schur convexity theorem, we then have the p-Wielandt-Hoffman inequality — ‖ λ ( A + B ) − λ ( A ) ‖ ℓ p ≤ ‖ B ‖ S p {\textstyle \|\lambda (A+B)-\lambda (A)\|_{\ell ^{p}}\leq \|B\|_{S^{p}}} where ‖ ⋅ ‖ S p {\textstyle \|\cdot \|_{S^{p}}} stands for the Schatten p -norm. Let A be a compact , Hermitian operator on a Hilbert space H . Recall that the spectrum of such an operator (the set of eigenvalues) is a set of real numbers whose only possible cluster point is zero. It is thus convenient to list the positive eigenvalues of A as λ 1 ≥ λ 2 ≥ ⋯ > 0 {\displaystyle \lambda _{1}\geq \lambda _{2}\geq \cdots >0} where entries are repeated with multiplicity , as in the matrix case. (To emphasize that the sequence is decreasing, we may write λ k = λ k ↓ {\displaystyle \lambda _{k}=\lambda _{k}^{\downarrow }} .) When H is infinite-dimensional, the above sequence of eigenvalues is necessarily infinite. We now apply the same reasoning as in the matrix case. Letting S k ⊂ H be a k -dimensional subspace, we can obtain the following theorem. Theorem (min-max) — λ k = max S k min x ∈ S k , ‖ x ‖ = 1 ( A x , x ) = min S k − 1 max x ⊥ S k − 1 , ‖ x ‖ = 1 ( A x , x ) {\displaystyle \lambda _{k}=\max _{S_{k}}\min _{x\in S_{k},\|x\|=1}(Ax,x)=\min _{S_{k-1}}\max _{x\perp S_{k-1},\|x\|=1}(Ax,x)} A similar pair of equalities hold for negative eigenvalues. Let S' be the closure of the linear span S ′ = span ⁡ { u k , u k + 1 , … } {\displaystyle S'=\operatorname {span} \{u_{k},u_{k+1},\ldots \}} . The subspace S' has codimension k − 1. By the same dimension count argument as in the matrix case, S' ∩ S k has positive dimension. So there exists x ∈ S' ∩ S k with ‖ x ‖ = 1 {\displaystyle \|x\|=1} . Since it is an element of S' , such an x necessarily satisfies ( A x , x ) ≤ λ k {\displaystyle (Ax,x)\leq \lambda _{k}} . Therefore, for all S k , inf x ∈ S k , ‖ x ‖ = 1 ( A x , x ) ≤ λ k {\displaystyle \inf _{x\in S_{k},\|x\|=1}(Ax,x)\leq \lambda _{k}} But A is compact, therefore the function f ( x ) = ( Ax , x ) is weakly continuous. Furthermore, any bounded set in H is weakly compact. This lets us replace the infimum by minimum: min x ∈ S k , ‖ x ‖ = 1 ( A x , x ) ≤ λ k {\displaystyle \min _{x\in S_{k},\|x\|=1}(Ax,x)\leq \lambda _{k}} So sup S k min x ∈ S k , ‖ x ‖ = 1 ( A x , x ) ≤ λ k {\displaystyle \sup _{S_{k}}\min _{x\in S_{k},\|x\|=1}(Ax,x)\leq \lambda _{k}} Because equality is achieved when S k = span ⁡ { u 1 , … , u k } {\displaystyle S_{k}=\operatorname {span} \{u_{1},\ldots ,u_{k}\}} , max S k min x ∈ S k , ‖ x ‖ = 1 ( A x , x ) = λ k {\displaystyle \max _{S_{k}}\min _{x\in S_{k},\|x\|=1}(Ax,x)=\lambda _{k}} This is the first part of the min-max theorem for compact self-adjoint operators. Analogously, consider now a ( k − 1) -dimensional subspace S k −1 , whose orthogonal complement is denoted by S k −1 ⊥ . If S' = span{ u 1 ... u k }, then S ′ ∩ S k − 1 ⊥ ≠ { 0 } {\displaystyle S'\cap S_{k-1}^{\perp }\neq \{0\}} , so there is a unit vector x {\displaystyle x} in the intersection, which satisfies ( A x , x ) ≥ λ k {\displaystyle (Ax,x)\geq \lambda _{k}} . This implies max x ∈ S k − 1 ⊥ , ‖ x ‖ = 1 ( A x , x ) ≥ λ k {\displaystyle \max _{x\in S_{k-1}^{\perp },\|x\|=1}(Ax,x)\geq \lambda _{k}} where the compactness of A was applied to replace the supremum by a maximum. Indexing the above by the collection of ( k − 1)-dimensional subspaces gives inf S k − 1 max x ∈ S k − 1 ⊥ , ‖ x ‖ = 1 ( A x , x ) ≥ λ k {\displaystyle \inf _{S_{k-1}}\max _{x\in S_{k-1}^{\perp },\|x\|=1}(Ax,x)\geq \lambda _{k}} Pick S k −1 = span{ u 1 , ..., u k −1 } and we deduce min S k − 1 max x ∈ S k − 1 ⊥ , ‖ x ‖ = 1 ( A x , x ) = λ k {\displaystyle \min _{S_{k-1}}\max _{x\in S_{k-1}^{\perp },\|x\|=1}(Ax,x)=\lambda _{k}} The min-max theorem also applies to (possibly unbounded) self-adjoint operators. [ 2 ] [ 3 ] Recall the essential spectrum is the spectrum without isolated eigenvalues of finite multiplicity.
Sometimes we have some eigenvalues below the essential spectrum, and we would like to approximate these eigenvalues and the corresponding eigenfunctions. Theorem (min-max) — Let A be self-adjoint and bounded below, and let E 1 ≤ E 2 ≤ E 3 ≤ ⋯ {\displaystyle E_{1}\leq E_{2}\leq E_{3}\leq \cdots } be the eigenvalues of A below the essential spectrum. Then E n = min ψ 1 , … , ψ n max { ⟨ ψ , A ψ ⟩ : ψ ∈ span ⁡ ( ψ 1 , … , ψ n ) , ‖ ψ ‖ = 1 } {\displaystyle E_{n}=\min _{\psi _{1},\ldots ,\psi _{n}}\max\{\langle \psi ,A\psi \rangle :\psi \in \operatorname {span} (\psi _{1},\ldots ,\psi _{n}),\,\|\psi \|=1\}} . If we only have N eigenvalues and hence run out of eigenvalues, then we let E n := inf σ e s s ( A ) {\displaystyle E_{n}:=\inf \sigma _{ess}(A)} (the bottom of the essential spectrum) for n > N , and the above statement holds after replacing min-max with inf-sup. Theorem (max-min) — Under the same assumptions, E n = max ψ 1 , … , ψ n − 1 min { ⟨ ψ , A ψ ⟩ : ψ ⊥ ψ 1 , … , ψ n − 1 , ‖ ψ ‖ = 1 } {\displaystyle E_{n}=\max _{\psi _{1},\ldots ,\psi _{n-1}}\min\{\langle \psi ,A\psi \rangle :\psi \perp \psi _{1},\ldots ,\psi _{n-1},\,\|\psi \|=1\}} . If we only have N eigenvalues and hence run out of eigenvalues, then we let E n := inf σ e s s ( A ) {\displaystyle E_{n}:=\inf \sigma _{ess}(A)} (the bottom of the essential spectrum) for n > N , and the above statement holds after replacing max-min with sup-inf. The proofs [ 2 ] [ 3 ] use the following results about self-adjoint operators: inf σ ( A ) = inf ψ ∈ D ( A ) , ‖ ψ ‖ = 1 ⟨ ψ , A ψ ⟩ {\displaystyle \inf \sigma (A)=\inf _{\psi \in {\mathfrak {D}}(A),\|\psi \|=1}\langle \psi ,A\psi \rangle } and sup σ ( A ) = sup ψ ∈ D ( A ) , ‖ ψ ‖ = 1 ⟨ ψ , A ψ ⟩ {\displaystyle \sup \sigma (A)=\sup _{\psi \in {\mathfrak {D}}(A),\|\psi \|=1}\langle \psi ,A\psi \rangle } . [ 2 ] : 77
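The finite-dimensional statements above lend themselves to quick numerical sanity checks. Below is a minimal Python/NumPy sketch (our illustration, not part of the article) that spot-checks the Rayleigh-quotient bounds, the Cauchy interlacing inequalities for a compression, and the p-Wielandt-Hoffman inequality on random symmetric matrices:

import numpy as np

rng = np.random.default_rng(0)
n, m, p = 6, 4, 2.0

X = rng.standard_normal((n, n))
A = (X + X.T) / 2                        # random real symmetric matrix
alpha = np.linalg.eigvalsh(A)            # eigenvalues in ascending order

# 1. Rayleigh quotients lie between the smallest and largest eigenvalue.
for _ in range(1000):
    x = rng.standard_normal(n)
    r = x @ A @ x / (x @ x)
    assert alpha[0] - 1e-12 <= r <= alpha[-1] + 1e-12

# 2. Cauchy interlacing for the compression onto the first m coordinates:
#    alpha_j <= beta_j <= alpha_{j+n-m} in ascending order.
B = A[:m, :m]
beta = np.linalg.eigvalsh(B)
for j in range(m):
    assert alpha[j] - 1e-12 <= beta[j] <= alpha[j + n - m] + 1e-12

# 3. p-Wielandt-Hoffman: the l^p distance between the sorted spectra of
#    A + P and A is at most the Schatten p-norm of the perturbation P.
Y = rng.standard_normal((n, n))
P = (Y + Y.T) / 2
spec = lambda M: np.sort(np.linalg.eigvalsh(M))[::-1]   # descending spectrum
lhs = np.linalg.norm(spec(A + P) - spec(A), ord=p)
rhs = np.linalg.norm(np.linalg.eigvalsh(P), ord=p)      # Schatten p-norm of P
assert lhs <= rhs + 1e-10
print("All three checks passed.")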
https://en.wikipedia.org/wiki/Min-max_theorem
The Min System is a mechanism composed of three proteins, MinC , MinD , and MinE , used by E. coli to properly localize the septum prior to cell division . Each component participates in generating a dynamic oscillation of FtsZ protein inhibition between the two bacterial poles to precisely specify the mid-zone of the cell, allowing the cell to accurately divide in two. This system is known to function in conjunction with a second negative regulatory system, the nucleoid occlusion system (NO), to ensure proper spatial and temporal regulation of chromosomal segregation and division. The initial discovery of this family of proteins is attributed to Adler et al. (1967). The proteins were first identified in E. coli mutants that could not produce a properly localized septum; mislocalized cell division occurring near the bacterial poles resulted in the generation of minicells , [ 1 ] [ 2 ] miniature vesicles that pinch off devoid of the essential molecular constituents that would permit them to exist as viable bacterial cells. Minicells are achromosomal cells that are products of aberrant cell division, and contain RNA and protein, but little or no chromosomal DNA . This finding led to the identification of three interacting proteins involved in a dynamic system of localizing the mid-zone of the cell for properly controlled cell division. [ citation needed ] The Min proteins prevent the FtsZ ring from being placed anywhere but near the mid-cell and are hypothesized to be involved in a spatial regulatory mechanism that links size increases prior to cell division to FtsZ polymerization in the middle of the cell. [ citation needed ] One model of Z-ring formation permits its formation only after a certain spatial signal that tells the cell that it is big enough to divide. [ 3 ] The MinCDE system prevents FtsZ polymerization near certain parts of the plasma membrane. MinD localizes to the membrane only at cell poles and contains an ATPase and an ATP-binding domain. MinD is only able to bind to the membrane when in its ATP-bound conformation. Once anchored, the protein polymerizes, resulting in clusters of MinD. These clusters bind and then activate another protein called MinC , which has activity only when bound by MinD. [ 4 ] MinC serves as an FtsZ inhibitor that prevents FtsZ polymerization. The high concentration of an FtsZ polymerization inhibitor at the poles prevents FtsZ from initiating division anywhere but at the mid-cell. [ 5 ] MinE is involved in preventing the formation of MinCD complexes in the middle of the cell. MinE forms a ring near each cell pole. This ring is not like the Z-ring. Instead, it catalyzes the release of MinD from the membrane by activating MinD's ATPase. This hydrolyzes MinD's bound ATP, preventing it from anchoring itself to the membrane. MinE prevents the MinD/C complex from forming in the center but allows it to stay at the poles. Once the MinD/C complex is released, MinC becomes inactivated. This prevents MinC from deactivating FtsZ. As a consequence, this activity imparts regional specificity to Min localization. [ 6 ] Thus, FtsZ can form a ring only in the center, where the concentration of the inhibitor MinC is minimal. Mutations that prevent the formation of MinE rings result in the MinCD zone extending well beyond the polar zones, preventing FtsZ from polymerizing and carrying out cell division. [ 7 ] MinD requires a nucleotide exchange step to re-bind to ATP so that it can re-associate with the membrane after MinE release.
This time lag results in a periodicity of Min association that may yield clues to a temporal signal linked to a spatial signal. In vivo observations show that the oscillation of Min proteins between cell poles occurs approximately every 50 seconds. [ 8 ] Oscillation of Min proteins, however, is not necessary for all bacterial cell division systems. Bacillus subtilis has been shown to have static concentrations of MinC and MinD at the cell poles. [ 9 ] This system still links cell size to the ability to form a septum via FtsZ and divide. The dynamic behavior of Min proteins has been reconstituted in vitro using artificial lipid bilayers, [ 10 ] with varying lipid composition [ 11 ] and different confinement geometries [ 12 ] as mimics for the cell membrane. The first pattern to be reconstituted was spiraling waves of MinD chased by MinE, [ 13 ] followed by the reconstitution of waves of all three proteins, MinD, MinE and MinC. [ 14 ] Importantly, MinD and MinE can self-organize into a wide variety of patterns depending on the reaction conditions. [ 15 ] [ 16 ] Additional study is required to elucidate the extent of temporal and spatial signaling permitted by this biological function. These in vitro systems offer unprecedented access to features such as residence times and molecular motility.
https://en.wikipedia.org/wiki/Min_System
Mina Bizic is an environmental microbiologist with a particular interest in aquatic systems. She is best known for her work on organic matter particles and oxic methane production. Since July 2024, she has been a Full Professor at the Technische Universität Berlin and Chair of Environmental Microbiomics at the Institute of Environmental Technology. [ 1 ] [ 2 ] [ 3 ] She was named a fellow of the Association for the Sciences of Limnology and Oceanography (ASLO) in 2022, and is serving on the ASLO board of directors, where she chairs the Early Career Committee. Bizic completed her Diploma studies in General Biology, and Hydroecology and Water Protection, at the University of Belgrade from 1999 to 2005. Following her diploma, in 2005–2006, she engaged in transdisciplinary academic research in ancient Jewish text studies at the European Institute for Jewish Studies in Sweden (PAIDEIA). [ 4 ] She later moved to Israel, where she worked for three years at the Kinneret Limnological Laboratory (KLL) of the Israel Oceanographic and Limnological Research [ 5 ] (IOLR). Subsequently, Bizic earned her Ph.D. from the Max Planck Institute for Marine Microbiology in Bremen and the University of Oldenburg as part of The International Max Planck Research School of Marine Microbiology ( MarMic ). Her doctoral thesis was titled " Polyphasic comparison of limnic and marine particle-associated bacteria ". [ 6 ] Following her Ph.D., she conducted a postdoc at the Leibniz Institute of Freshwater Ecology and Inland Fisheries ( IGB ). In 2019, Bizic obtained an independent researcher grant from the DFG (German Research Foundation). [ 7 ] During her investigations into marine and lake snow, Bizic and her collaborators developed a novel experimental device, a flow-through rolling tank, which facilitates long-term experiments on marine and lake snow [ 8 ] while addressing biases inherent in closed systems, commonly referred to as the bottle effect. [ 9 ] Using this device, Bizic demonstrated that microbial degradation of marine snow takes longer than predicted using closed experimental systems. This finding implies that the biological carbon pump may sequester more carbon than experimentally estimated. In a subsequent study, Bizic and her colleagues conducted the first research utilizing molecular tools to focus on individual marine and lake snow particles rather than pooling thousands together. This study revealed that particles from the same source are colonized by different bacteria, in what appears to be a stochastic colonization process. Furthermore, the study highlighted that, at the early stages of colonization, bacterial succession is primarily driven by competition rather than a change in the quality of available organic matter. [ 10 ] In parallel with her work on marine and lake snow, Bizic has investigated aerobic methane production , [ 11 ] a phenomenon known as "The Methane Paradox". [ 12 ] This process is increasingly recognized as a significant source of the potent greenhouse gas methane in aquatic systems. Bizic and her colleagues were the first to demonstrate the conversion of methylamines to methane under aerobic conditions. [ 13 ] This process was later comprehensively characterized by Wang and colleagues. [ 14 ] Subsequently, Bizic and her team revealed that cyanobacteria , the most abundant photosynthetic organisms on Earth, emit methane as a byproduct of photosynthesis .
[ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] The implications of this discovery were explored by Bizic in a subsequent opinion paper. [ 22 ] In 2022, Bizic was elected to the Board of Directors of the Association for the Sciences of Limnology and Oceanography (ASLO). [ 23 ] She currently serves as a member at large and chairs the Early Career Committee of ASLO, organizing activities for the benefit of early career scientists, such as promoting early career scientists from historically excluded groups [ 24 ] and organizing webinars to improve the mental well-being of scientists. [ 25 ] Bizic's contributions to aquatic research and to ASLO were further acknowledged when she was named an ASLO fellow in 2022. [ 26 ] Beyond her academic endeavors, Bizic is involved in the Global Lake Ecological Observatory Network (GLEON) and, as of 2024, serves as a member of its committee for inclusive collaboration. [ 27 ] Bizic has participated in interviews, events and panel discussions addressing the role of women in science, such as the Marthe Vogt podcast [ 28 ] [ 29 ] and Soapbox Science . [ 30 ] [ 31 ] Mina Bizic was born in Belgrade , Serbia, and has lived in Sweden and Israel. In 2009, she relocated to Germany. She is the sibling of opera singer David Bizic and is married to fellow scientist Danny Ionescu , with whom she has two children. [ 28 ]
https://en.wikipedia.org/wiki/Mina_Bizic
A mind-controlled wheelchair is a motorized wheelchair controlled by a brain–computer interface . Such a wheelchair could be of great importance to patients with locked-in syndrome (LIS), in which a patient is aware but cannot move or communicate verbally due to complete paralysis of nearly all voluntary muscles in the body except the eyes. Such wheelchairs can also be used in cases of muscular dystrophy , a disease that weakens the musculoskeletal system and hampers locomotion. The technology behind brain or mind control goes back to at least 2002, when researchers implanted electrodes into the brains of macaque monkeys, which enabled them to control a cursor on a computer screen. Similar techniques were able to control robotic arms and simple joysticks. [ 1 ] In 2009, researchers at the University of South Florida developed a wheelchair-mounted robotic arm that captured the user's brain waves and converted them into robotic movements. The brain–computer interface (BCI), which captures P300 brain wave responses and converts them to actions, was developed by USF psychology professor Emanuel Donchin and colleagues. The P300 brain signal serves as a virtual "finger" for patients who cannot move, such as those with locked-in syndrome or those with Lou Gehrig's disease (ALS). [ 2 ] The first mind-controlled wheelchair reached production in 2016. It was designed by Diwakar Vaish , Head of Robotics and Research at A-SET Training & Research Institutes. [ 3 ] [ 4 ] [ 5 ] In November 2022, researchers at the University of Texas at Austin demonstrated a mind-controlled wheelchair using an EEG device. [ 6 ] In addition, in March 2022 a paper from Clarkson University set out the design of a mind-controlled wheelchair, also using an EEG. [ 7 ] A mind-controlled wheelchair functions using a brain–computer interface : an electroencephalography (EEG) headset worn on the user's head detects the neural signals that reach the scalp, allowing the on-board microcontroller to detect the user's intent, interpret it, and control the wheelchair's movement. In its November 2022 study, the University of Texas at Austin examined the effectiveness of one such mind-controlled wheelchair. Similar to the BCI described above, the machine translates brain waves into movements; specifically, the participants were instructed to visualize moving their extremities to prompt the wheelchair to move. The study used non-invasive electrodes in an electroencephalogram cap, as opposed to internally installed electrodes. [ 6 ] In March 2022, Stoyell et al. at Clarkson University published a paper in which they planned a design for a mind-controlled wheelchair based on the Emotiv EPOC X headset, an electroencephalogram device. [ 7 ] The A-SET wheelchair comes standard with many different types of sensors , such as temperature sensors , sound sensors and an array of distance sensors that detect any unevenness in the surface. The chair automatically avoids stairs and steep inclines. It also has a "safety switch": in case of danger, the user can close their eyes quickly to trigger an emergency stop. In the case of the chair designed by Stoyell et al., the only equipment needed to use the chair is the EMOTIV EPOC X headset. Both the University of Texas' and Clarkson University's designs have the benefit of being noninvasive, with the electrodes being placed onto the head as opposed to being surgically implanted. This makes these products relatively more accessible. [ 6 ] [ 7 ]
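To make the signal path concrete, here is a deliberately simplified Python sketch (all names, thresholds, and the decision rule are hypothetical illustrations, not taken from any of the systems above) of a classic motor-imagery cue used by EEG-based BCIs: imagining limb movement suppresses power in the mu band (roughly 8-12 Hz) over motor cortex, and a controller can map that change to a drive command.

import numpy as np

FS = 250  # sampling rate in Hz (typical for consumer EEG headsets)

def mu_band_power(signal, fs=FS):
    # Power in the 8-12 Hz "mu" band, estimated with a plain FFT.
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[band].sum()

def command_from_window(window, baseline_power, threshold=0.6):
    # Hypothetical rule: strong mu suppression relative to rest -> "forward".
    if mu_band_power(window) < threshold * baseline_power:
        return "forward"
    return "stop"

# Toy demo: a resting window rich in 10 Hz activity vs. a "motor imagery"
# window in which that rhythm is suppressed.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
rest = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(FS)
imagery = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(FS)

baseline = mu_band_power(rest)
print(command_from_window(rest, baseline))     # stop
print(command_from_window(imagery, baseline))  # forward

A real system would of course use calibrated per-user classifiers and safety interlocks rather than a fixed threshold; the sketch only illustrates the band-power idea behind the "visualize moving extremities" protocol described above.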
https://en.wikipedia.org/wiki/Mind-controlled_wheelchair
MindSphere is an industrial IoT -as-a-service solution [ 1 ] developed by Siemens for applications in the context of the Internet of Things ( IoT ). [ 2 ] MindSphere stores operational data and makes it accessible through digital applications (“MindSphere applications”) to allow industrial customers to make decisions based on valuable factual information. [ 3 ] The system is used in applications such as automated production and vehicle fleet management. [ 2 ] [ 4 ] Assets can be securely connected to MindSphere with auxiliary MindSphere products that collect and transfer relevant machine and plant data. [ 2 ] Examples include real-time telemetric data from moving assets like cars, time series data and geographical data, which can be used for predictive maintenance or to develop new analytical tools. [ 4 ] [ 5 ] MindSphere is now known as Insights Hub. [ 6 ] As an industrial IoT as a service solution, MindSphere collects and analyzes all kinds of sensor data in real time. [ 4 ] This information can be used to optimize products, production assets and manufacturing processes along the entire value chain. [ 7 ] MindSphere’s open application interfaces make it possible to obtain data from machines, plants or entire fleets irrespective of the manufacturer. [ 2 ] These interfaces include OPC Foundation ’s OPC Unified Architecture ( OPC UA ). [ 8 ] To help customers create their own software applications and services, MindSphere is equipped with open application programming interfaces (APIs) and development tools. [ 2 ] [ 3 ] This allows OEMs to integrate their own technology. [ 9 ] MindSphere is based on the concept of closed feedback loops enabling the bi-directional data flow between production and development: [ 10 ] Real-world plants, machines and equipment can be connected to MindSphere in order to extract operational data. [ 2 ] Valuable information (i.e., “ digital twins ” of machines) can then be extrapolated from the raw data through analytics and utilized to optimize products as well as production processes and environments in the next cycle of innovation. [ 3 ] [ 7 ] [ 11 ]
https://en.wikipedia.org/wiki/MindSphere
A mind map is a diagram used to visually organize information into a hierarchy , showing relationships among pieces of the whole. [ 1 ] It is often based on a single concept, drawn as an image in the center of a blank page, to which associated representations of ideas such as images, words and parts of words are added. Major ideas are connected directly to the central concept, and other ideas branch out from those major ideas. Mind maps can also be drawn by hand, either as "notes" during a lecture, meeting or planning session, for example, or as higher-quality pictures when more time is available. Mind maps are considered to be a type of spider diagram . [ 2 ] Although the term "mind map" was first popularized by British popular psychology author and television personality Tony Buzan , [ 3 ] [ 4 ] the use of diagrams that visually "map" information using branching and radial maps traces back centuries. [ 5 ] These pictorial methods record knowledge and model systems, and have a long history in learning, brainstorming , memory , visual thinking , and problem solving by educators, engineers, psychologists, and others. Some of the earliest examples of such graphical records were developed by Porphyry of Tyros , a noted thinker of the 3rd century, as he graphically visualized the concept categories of Aristotle . [ 5 ] The philosopher Ramon Llull (1235–1315) also used such techniques. [ 5 ] Buzan's specific approach, and the introduction of the term "mind map", started with a 1974 BBC TV series he hosted, called Use Your Head . [ 6 ] In this show and its companion book series, Buzan promoted his conception of the radial tree, diagramming key words in a colorful, radiant, tree-like structure. [ 7 ] Cunningham (2005) conducted a user study in which 80% of the students thought "mindmapping helped them understand concepts and ideas in science". [ 10 ] Other studies also report some subjective positive effects of the use of mind maps. [ 11 ] [ 12 ] Positive opinions on their effectiveness, however, were much more prominent among students of art and design than among students of computer and information technology, with 62.5% vs 34% (respectively) agreeing that they were able to understand concepts better with mind mapping software. [ 11 ] Farrand, Hussain, and Hennessy (2002) found that spider diagrams (similar to concept maps) had a limited but significant impact on memory recall in undergraduate students (a 10% increase over baseline for a 600-word text only) as compared to preferred study methods (a 6% increase over baseline). [ 13 ] This improvement was only robust after a week for those in the diagram group, and there was a significant decrease in motivation compared to the subjects' preferred methods of note taking . A meta study about concept mapping concluded that concept mapping is more effective than "reading text passages, attending lectures, and participating in class discussions". [ 14 ] The same study also concluded that concept mapping is slightly more effective "than other constructive activities such as writing summaries and outlines". However, results were inconsistent, with the authors noting "significant heterogeneity was found in most subsets". In addition, they concluded that low-ability students may benefit more from mind mapping than high-ability students. Joeran Beel and Stefan Langer conducted a comprehensive analysis of the content of mind maps. [ 15 ] They analysed 19,379 mind maps from 11,179 users of the mind mapping applications SciPlore MindMapping (now Docear ) and MindMeister .
Results include that average users create only a few mind maps (mean = 2.7), that average mind maps are rather small (31 nodes), and that each node contains about three words (median). However, there were exceptions: one user created more than 200 mind maps, the largest mind map consisted of more than 50,000 nodes, and the largest node contained ~7,500 words. The study also showed that significant differences exist between mind mapping applications (Docear vs MindMeister) in how users create mind maps. There have been some attempts to create mind maps automatically. Brucks & Schommer created mind maps automatically from full-text streams. [ 16 ] Rothenberger et al. extracted the main story of a text and presented it as a mind map. [ 17 ] There is also a patent application about automatically creating sub-topics in mind maps. [ 18 ] Mind-mapping software can be used to organize large amounts of information, combining spatial organization, dynamic hierarchical structuring and node folding. Software packages can extend the concept of mind-mapping by allowing individuals to map more than thoughts and ideas, linking in information on their computers and the Internet, like spreadsheets, documents, Internet sites, images and videos. [ 19 ] It has been suggested that mind-mapping can improve learning/study efficiency up to 15% over conventional note-taking . [ 13 ] The following dozen examples of mind maps show the range of styles that a mind map may take, from hand-drawn to computer-generated and from mostly text to highly illustrated. Despite their stylistic differences, all of the examples share a tree structure that hierarchically connects sub-topics to a main topic.
https://en.wikipedia.org/wiki/Mind_map
MindNet is the name of several automatically acquired databases of lexico-semantic relations [ clarification needed ] developed by members of the Natural Language Processing Group at Microsoft Research during the 1990s. [ 1 ] [ 2 ] [ 3 ] It is considered, along with WordNet , FrameNet , HowNet, and the Integrated Linguistic Database, to be one of the world's largest lexicons and databases capable of producing automatic semantic descriptions. [ 4 ] It is particularly distinguished from WordNet by the way it was created automatically from a dictionary. [ 5 ] MindNet was designed to be continuously extended. It was first built out of the Longman Dictionary of Contemporary English (LDOCE) and later included American Heritage and the full text of Microsoft Encarta . [ 6 ] The system can analyze linguistic representations of arbitrary text. [ 6 ] The underlying technology is based on the same parser used in the Microsoft Word grammar checker and was deployed in the natural language query engine in Microsoft's Encarta 99 encyclopedia. [ 7 ]
https://en.wikipedia.org/wiki/Mindnet
The mind–body problem is a philosophical problem concerning the relationship between thought and consciousness in the human mind and body. [ 1 ] [ 2 ] It addresses the nature of consciousness, mental states, and their relation to the physical brain and nervous system. The problem centers on understanding how immaterial thoughts and feelings can interact with the material world, or whether they are ultimately physical phenomena. This problem has been a central issue in philosophy of mind since the 17th century, particularly following René Descartes' formulation of dualism, which proposes that mind and body are fundamentally distinct substances. Other major philosophical positions include monism, which encompasses physicalism (everything is ultimately physical) and idealism (everything is ultimately mental). More recent approaches include functionalism, property dualism, and various non-reductive theories. The mind–body problem raises fundamental questions about causation between mental and physical events, the nature of consciousness, personal identity, and free will. It remains significant in both philosophy and science, influencing fields such as cognitive science, neuroscience, psychology, and artificial intelligence.

In general, the existence of these mind–body connections seems unproblematic. Issues arise, however, when attempting to interpret these relations from a metaphysical or scientific perspective. Such reflections raise a number of questions about the relation between mind and body, all of which fall under the banner of the 'mind–body problem'.

Philosophers David L. Robb and John F. Heil introduce mental causation in terms of the mind–body problem of interaction:

Mind–body interaction has a central place in our pretheoretic conception of agency. Indeed, mental causation often figures explicitly in formulations of the mind–body problem. Some philosophers insist that the very notion of psychological explanation turns on the intelligibility of mental causation. If your mind and its states, such as your beliefs and desires, were causally isolated from your bodily behavior, then what goes on in your mind could not explain what you do. If psychological explanation goes, so do the closely related notions of agency and moral responsibility. Clearly, a good deal rides on a satisfactory solution to the problem of mental causation [and] there is more than one way in which puzzles about the mind's "causal relevance" to behavior (and to the physical world more generally) can arise. [René Descartes] set the agenda for subsequent discussions of the mind–body relation. According to Descartes, minds and bodies are distinct kinds of "substance". Bodies, he held, are spatially extended substances, incapable of feeling or thought; minds, in contrast, are unextended, thinking, feeling substances. If minds and bodies are radically different kinds of substance, however, it is not easy to see how they "could" causally interact.

Princess Elizabeth of Bohemia puts it forcefully to him in a 1643 letter:

how the human soul can determine the movement of the animal spirits in the body so as to perform voluntary acts—being as it is merely a conscious substance. For the determination of movement seems always to come about from the moving body's being propelled—to depend on the kind of impulse it gets from what sets it in motion, or again, on the nature and shape of this latter thing's surface.
Now the first two conditions involve contact, and the third involves that the impelling thing has extension; but you utterly exclude extension from your notion of soul, and contact seems to me incompatible with a thing's being immaterial...

Elizabeth is expressing the prevailing mechanistic view as to how causation of bodies works. Causal relations countenanced by contemporary physics can take several forms, not all of which are of the push–pull variety. [ 3 ] Contemporary neurophilosopher Georg Northoff suggests that mental causation is compatible with classical formal and final causality. [ 4 ] The biologist, theoretical neuroscientist and philosopher Walter J. Freeman suggests that explaining mind–body interaction in terms of "circular causation" is more relevant than linear causation. [ 5 ]

In neuroscience, much has been learned about correlations between brain activity and subjective, conscious experiences. Many suggest that neuroscience will ultimately explain consciousness: "...consciousness is a biological process that will eventually be explained in terms of molecular signaling pathways used by interacting populations of nerve cells..." [ 6 ] However, this view has been criticized because consciousness has yet to be shown to be a process, [ 7 ] and the "hard problem" of relating consciousness directly to brain activity remains elusive. [ 8 ]

Cognitive science today is increasingly interested in the embodiment of human perception, thinking, and action. Abstract information processing models are no longer accepted as satisfactory accounts of the human mind. Interest has shifted to interactions between the material human body and its surroundings and to the way in which such interactions shape the mind. Proponents of this approach have expressed the hope that it will ultimately dissolve the Cartesian divide between the immaterial mind and the material existence of human beings (Damasio, 1994; Gallagher, 2005). A topic that seems particularly promising for providing a bridge across the mind–body cleavage is the study of bodily actions, which are neither reflexive reactions to external stimuli nor indications of mental states that have only arbitrary relationships to the motor features of the action (e.g., pressing a button for making a choice response). The shape, timing, and effects of such actions are inseparable from their meaning. One might say that they are loaded with mental content, which cannot be appreciated other than by studying their material features. Imitation, communicative gesturing, and tool use are examples of these kinds of actions. [ 9 ]

At the 1927 Solvay Conference in Brussels, physicists of the late 19th and early 20th centuries realized that the interpretation of their experiments with light and electricity required a different theory to explain why light behaves both as a wave and as a particle. The implications were profound: the usual empirical model of explaining natural phenomena could not account for this duality of matter and non-matter. In a significant way, this has brought back the conversation on mind–body duality. [ 10 ]

The neural correlates of consciousness "are the smallest set of brain mechanisms and events sufficient for some specific conscious feeling, as elemental as the color red or as complex as the sensual, mysterious, and primeval sensation evoked when looking at [a] jungle scene..." [ 12 ] Neuroscientists use empirical approaches to discover neural correlates of subjective phenomena.
[ 13 ] A science of consciousness must explain the exact relationship between subjective conscious mental states and brain states formed by electrochemical interactions in the body, the so-called hard problem of consciousness. [ 14 ] Neurobiology studies the connection scientifically, as do neuropsychology and neuropsychiatry. Neurophilosophy is the interdisciplinary study of neuroscience and philosophy of mind. In this pursuit, neurophilosophers such as Patricia Churchland, [ 15 ] [ 16 ] Paul Churchland [ 17 ] and Daniel Dennett [ 18 ] [ 19 ] have focused primarily on the body rather than the mind. In this context, neuronal correlates may be viewed as causing consciousness, where consciousness can be thought of as an undefined property that depends upon this complex, adaptive, and highly interconnected biological system. [ 20 ] However, it is unknown whether discovering and characterizing neural correlates may eventually provide a theory of consciousness that can explain the first-person experience of these "systems", and determine whether other systems of equal complexity lack such features.

The massive parallelism of neural networks allows redundant populations of neurons to mediate the same or similar percepts. Nonetheless, it is assumed that every subjective state will have associated neural correlates, which can be manipulated to artificially inhibit or induce the subject's experience of that conscious state. The growing ability of neuroscientists to manipulate neurons using methods from molecular biology in combination with optical tools [ 21 ] rests on the development of behavioral and organic models that are amenable to large-scale genomic analysis and manipulation. Non-human analyses such as these, in combination with imaging of the human brain, have contributed to a robust and increasingly predictive theoretical framework.

There are two common but distinct dimensions of the term consciousness, [ 23 ] one involving arousal and states of consciousness and the other involving content of consciousness and conscious states. To be conscious of something, the brain must be in a relatively high state of arousal (sometimes called vigilance), whether awake or in REM sleep. Brain arousal level fluctuates in a circadian rhythm, but these natural cycles may be influenced by lack of sleep, alcohol and other drugs, physical exertion, etc. Arousal can be measured behaviorally by the signal amplitude required to trigger a given reaction (for example, the sound level that causes a subject to turn and look toward the source). High arousal states involve conscious states that feature specific perceptual content, planning and recollection or even fantasy. Clinicians use scoring systems such as the Glasgow Coma Scale to assess the level of arousal in patients with impaired states of consciousness such as the comatose state, the persistent vegetative state, and the minimally conscious state. Here, "state" refers to different amounts of externalized, physical consciousness, ranging from a total absence in coma, persistent vegetative state and general anesthesia, to a fluctuating, minimally conscious state, such as sleepwalking and epileptic seizure. [ 24 ]

Many nuclei with distinct chemical signatures in the thalamus, midbrain and pons must function for a subject to be in a sufficient state of brain arousal to experience anything at all. These nuclei therefore belong to the enabling factors for consciousness.
Conversely, it is likely that the specific content of any particular conscious sensation is mediated by particular neurons in the cortex and their associated satellite structures, including the amygdala, thalamus, claustrum and the basal ganglia.

A variety of approaches have been proposed. Most are either dualist or monist. Dualism maintains a rigid distinction between the realms of mind and matter. Monism maintains that there is only one unifying reality, substance or essence, in terms of which everything can be explained. Each of these categories contains numerous variants. The two main forms of dualism are substance dualism, which holds that the mind is formed of a distinct type of substance not governed by the laws of physics, and property dualism, which holds that mental properties involving conscious experience are fundamental properties, alongside the fundamental properties identified by a completed physics. The three main forms of monism are physicalism, which holds that the mind consists of matter organized in a particular way; idealism, which holds that only thought truly exists and matter is merely a representation of mental processes; and neutral monism, which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them. Psychophysical parallelism is a third possible alternative regarding the relation between mind and body, between interaction (dualism) and one-sided action (monism). [ 25 ]

Several philosophical perspectives have been developed that seek to escape the problem by rejecting the mind–body dichotomy. The historical materialism of Karl Marx and subsequent writers, itself a form of physicalism, held that consciousness was engendered by the material contingencies of one's environment. [ 26 ] An explicit rejection of the dichotomy is found in French structuralism, and is a position that generally characterized post-war Continental philosophy. [ 27 ]

An ancient model of the mind known as the Five-Aggregate Model, described in the Buddhist teachings, explains the mind as continuously changing sense impressions and mental phenomena. [ 28 ] On this model, it is the constantly changing sense impressions and mental phenomena (i.e., the mind) that experience and analyze all external phenomena in the world as well as all internal phenomena, including the body's anatomy, the nervous system and the brain itself. This conceptualization leads to two levels of analysis: (i) analyses conducted from a third-person perspective on how the brain works, and (ii) analyses of the moment-to-moment manifestation of an individual's mind-stream, conducted from a first-person perspective. Considering the latter, the manifestation of the mind-stream is described as happening in every person all the time, even in a scientist who analyzes various phenomena in the world, including analyzing and hypothesizing about the brain itself. [ 28 ]

Christian List argues that Benj Hellie's vertiginous question, i.e. why an individual exists as themselves and not as someone else, together with the existence of first-personal facts, is evidence against physicalism. However, according to List, this is also evidence against other third-personal metaphysical pictures, including standard versions of dualism. [ 29 ] List also argues that the vertiginous question implies a "quadrilemma" for theories of consciousness.
He claims that at most three of the following metaphysical claims can be true: 'first-person realism', 'non-solipsism', 'non-fragmentation', and 'one world' – and thus one of these four must be rejected. [ 30 ] List has proposed a model he calls the "many-worlds theory of consciousness" in order to accommodate the subjective nature of consciousness without lapsing into solipsism. [ 31 ]

The following is a very brief account of some contributions to the mind–body problem.

The viewpoint of interactionism suggests that the mind and body are two separate substances, but that each can affect the other. [ 32 ] This interaction between the mind and body was first put forward by the philosopher René Descartes. Descartes believed that the mind was non-physical and permeated the entire body, but that the mind and body interacted via the pineal gland. [ 33 ] [ 34 ] This theory has changed throughout the years, and in the 20th century its main adherents were the philosopher of science Karl Popper and the neurophysiologist John Carew Eccles. [ 35 ] [ 36 ] A more recent and popular version of interactionism is the viewpoint of emergentism. [ 32 ] This perspective states that mental states result from brain states, and that mental events can then influence the brain, resulting in two-way communication between the mind and body. [ 32 ]

The absence of an empirically identifiable meeting point between the non-physical mind (if there is such a thing) and its physical extension (if there is such a thing) has been raised as a criticism of interactionist dualism. This criticism has led many modern philosophers of mind to maintain that the mind is not something separate from the body. [ 37 ] These approaches have been particularly influential in the sciences, particularly in the fields of sociobiology, computer science, evolutionary psychology, and the neurosciences. [ 38 ] [ 39 ] [ 40 ] [ 41 ]

Avshalom Elitzur has defended interactionism and has described himself as a "reluctant dualist". One argument Elitzur makes in favor of dualism is an argument from bafflement. According to Elitzur, a conscious being can conceive of a P-zombie version of itself; a P-zombie, however, cannot conceive of a version of itself that lacks corresponding qualia. [ 42 ]

The viewpoint of epiphenomenalism suggests that the physical brain can cause mental events in the mind, but that the mind cannot interact with the brain at all; mental occurrences are simply a side effect of the brain's processes. [ 32 ] On this view, while one's body may react to one's feeling joy, fear, or sadness, the emotion does not cause the physical response. Rather, joy, fear, sadness, and all bodily reactions are caused by chemicals and their interaction with the body. [ 43 ]

The viewpoint of psychophysical parallelism suggests that the mind and body are entirely independent of one another. Furthermore, this viewpoint states that mental and physical stimuli and reactions are experienced simultaneously by both the mind and body, but that there is no interaction or communication between the two. [ 32 ] [ 44 ]

Double aspectism is an extension of psychophysical parallelism which also suggests that the mind and body cannot interact, nor can they be separated.
[ 32 ] Baruch Spinoza and Gustav Fechner were two notable proponents of double aspectism; Fechner later expanded upon it to form the branch of psychophysics in an attempt to prove the relationship of mind and body. [ 45 ]

The viewpoint of pre-established harmony is another offshoot of psychophysical parallelism, which suggests that mental events and bodily events are separate and distinct, but that they are both coordinated by an external agent; an example of such an agent could be God. [ 32 ] A notable adherent of the idea of pre-established harmony is Gottfried Wilhelm von Leibniz, in his theory of Monadology. [ 46 ] His explanation of pre-established harmony relied heavily upon God as the external agent who coordinated the mental and bodily events of all things in the beginning. [ 47 ]

Gottfried Wilhelm Leibniz's theory of pre-established harmony (French: harmonie préétablie) is a philosophical theory about causation under which every "substance" affects only itself, but all the substances (both bodies and minds) in the world nevertheless seem to causally interact with each other because they have been programmed by God in advance to "harmonize" with each other. Leibniz's term for these substances was "monads", which he described in a popular work (Monadology §7) as "windowless".

The concept of pre-established harmony can be understood by considering an event with both seemingly mental and physical aspects. For example, consider saying 'ouch' after stubbing one's toe. There are two general ways to describe this event: in terms of mental events (where the conscious sensation of pain caused one to say 'ouch') and in terms of physical events (where neural firings in one's toe, carried to the brain, are what caused one to say 'ouch'). The main task of the mind–body problem is figuring out how these mental events (the feeling of pain) and physical events (the nerve firings) relate. Leibniz's pre-established harmony attempts to answer this puzzle by saying that mental and physical events are not genuinely related in any causal sense, but only seem to interact due to psycho-physical fine-tuning.

Leibniz's theory is best known as a solution to the mind–body problem of how mind can interact with the body, but Leibniz also rejected the idea of physical bodies affecting each other, and explained all physical causation in the same way. Under pre-established harmony, the preprogramming of each mind must be extremely complex, since only it causes its own thoughts or actions, for as long as it exists. To appear to interact, each substance's "program" must contain a description of either the entire universe, or of how the object behaves at all times during all interactions that appear to occur.

Note that if a mind behaves as a windowless monad, there is no need for any other object to exist to create that mind's sense perceptions, leading to a solipsistic universe that consists only of that mind. Leibniz seems to admit this in his Discourse on Metaphysics, section 14. However, he claims that his principle of harmony, according to which God creates the best and most harmonious world possible, dictates that the perceptions (internal states) of each monad "express" the world in its entirety, and the world expressed by the monad actually exists. Although Leibniz says that each monad is "windowless", he also claims that it functions as a "mirror" of the entire created universe.

On occasion, Leibniz styled himself as "the author of the system of pre-established harmony".
[ 48 ] Immanuel Kant's professor Martin Knutzen regarded pre-established harmony as "the pillow for the lazy mind". [ 49 ]

In his sixth Metaphysical Meditation, Descartes talked about a "coordinated disposition of created things set up by God", shortly after having identified "nature in its general aspect" with God himself. His conception of the relationship between God and his normative nature actualized in the existing world recalls both the pre-established harmony of Leibniz and the Deus sive Natura of Baruch Spinoza. [ 50 ]

The viewpoint of occasionalism is another offshoot of psychophysical parallelism; the major difference is that the mind and body have some indirect interaction. Occasionalism suggests that the mind and body are separate and distinct, but that they interact through divine intervention. [ 32 ] Nicolas Malebranche was one of the main contributors to this idea, using it as a way to address his disagreements with Descartes' view of the mind–body problem. [ 51 ] In Malebranche's occasionalism, a thought is a wish for the body to move, which is then fulfilled by God causing the body to act. [ 51 ]

The problem was popularized by René Descartes in the 17th century, resulting in Cartesian dualism, but it was also addressed by pre-Aristotelian philosophers, [ 52 ] [ 53 ] in Avicennian philosophy, [ 54 ] and in earlier Asian traditions.

The Buddha (480–400 B.C.E.), founder of Buddhism, described the mind and the body as depending on each other in the way that two sheaves of reeds stand leaning against one another, [ 55 ] and taught that the world consists of mind and matter which work together, interdependently. Buddhist teachings describe the mind as manifesting from moment to moment, one thought moment at a time, as a fast-flowing stream. [ 28 ] The components that make up the mind are known as the five aggregates (i.e., material form, feelings, perception, volition, and sensory consciousness), which arise and pass away continuously. The arising and passing of these aggregates in the present moment is described as being influenced by five causal laws: biological laws, psychological laws, physical laws, volitional laws, and universal laws. [ 28 ] The Buddhist practice of mindfulness involves attending to this constantly changing mind-stream. Ultimately, the Buddha's philosophy is that both mind and forms are conditionally arising qualities of an ever-changing universe in which, when nirvāna is attained, all phenomenal experience ceases to exist. [ 56 ] According to the anattā doctrine of the Buddha, the conceptual self is a mere mental construct of an individual entity and is basically an impermanent illusion, sustained by form, sensation, perception, thought and consciousness. [ 57 ] The Buddha argued that mentally clinging to any views will result in delusion and stress, [ 58 ] since, according to the Buddha, a real self (conceptual self, being the basis of standpoints and views) cannot be found when the mind has clarity.

Plato (429–347 B.C.E.) believed that the material world is a shadow of a higher reality that consists of concepts he called Forms. According to Plato, objects in our everyday world "participate in" these Forms, which confer identity and meaning to material objects. For example, a circle drawn in the sand would be a circle only because it participates in the concept of an ideal circle that exists somewhere in the world of Forms. He argued that, as the body is from the material world, the soul is from the world of Forms and is thus immortal.
He believed the soul was temporarily united with the body and would only be separated at death, when it, if pure, would return to the world of Forms; otherwise, reincarnation follows. Since the soul does not exist in time and space, as the body does, it can access universal truths. For Plato, ideas (or Forms) are the true reality, and are experienced by the soul. The body is for Plato empty in that it cannot access the abstract reality of the world; it can only experience shadows. This is determined by Plato's essentially rationalistic epistemology. [ 59 ]

For Aristotle (384–322 BC), mind is a faculty of the soul. [ 60 ] [ 61 ] Regarding the soul, he said:

It is not necessary to ask whether soul and body are one, just as it is not necessary to ask whether the wax and its shape are one, nor generally whether the matter of each thing and that of which it is the matter are one. For even if one and being are spoken of in several ways, what is properly so spoken of is the actuality.

In the end, Aristotle saw the relation between soul and body as uncomplicated, in the same way that it is uncomplicated that a cubical shape is a property of a toy building block. The soul is a property exhibited by the body, one among many. Moreover, Aristotle proposed that when the body perishes, so does the soul, just as the shape of a building block disappears with destruction of the block. [ 62 ]

Working in the Aristotelian-influenced tradition of Thomism, Thomas Aquinas (1225–1274), like Aristotle, believed that the mind and the body are one, like a seal and wax; therefore, it is pointless to ask whether or not they are one. However, referring to the mind as "the soul", he asserted that the soul persists after the death of the body in spite of their unity, calling the soul "this particular thing". Since his view was primarily theological rather than philosophical, it is impossible to fit it neatly within either the category of physicalism or dualism. [ 63 ]

In religious philosophy of Eastern monotheism, dualism denotes a binary opposition of an idea that contains two essential parts. The first formal concept of a "mind–body" split may be found in the divinity–secularity dualism of the ancient Persian religion of Zoroastrianism around the mid-fifth century BC. Gnosticism is a modern name for a variety of ancient dualistic ideas inspired by Judaism that were popular in the first and second centuries AD. These ideas later seem to have been incorporated into Galen's "tripartite soul" [ 64 ] that led into both the Christian sentiments [ 65 ] expressed in the later Augustinian theodicy and Avicenna's Platonism in Islamic Philosophy.

René Descartes (1596–1650) believed that mind exerted control over the brain via the pineal gland:

My view is that this gland is the principal seat of the soul, and the place in which all our thoughts are formed. [ 66 ] [The] mechanism of our body is so constructed that simply by this gland's being moved in any way by the soul or by any other cause, it drives the surrounding spirits towards the pores of the brain, which direct them through the nerves to the muscles; and in this way the gland makes the spirits move the limbs. [ 67 ]

His posited relation between mind and body is called Cartesian dualism or substance dualism. He held that mind was distinct from matter, but could influence matter. How such an interaction could be exerted remains a contentious issue.
For Immanuel Kant (1724–1804), beyond mind and matter there exists a world of a priori forms, which are seen as necessary preconditions for understanding. Some of these forms, space and time being examples, today seem to be pre-programmed in the brain.

...whatever it is that impinges on us from the mind-independent world does not come located in a spatial or a temporal matrix,...The mind has two pure forms of intuition built into it to allow it to... organize this 'manifold of raw intuition'. [ 68 ]

Kant views the mind–body interaction as taking place through forces that may be of different kinds for mind and body. [ 69 ]

For Thomas Henry Huxley (1825–1895), the conscious mind was a by-product of the brain that has no influence upon the brain, a so-called epiphenomenon. On the epiphenomenalist view, mental events play no causal role. Huxley, who held the view, compared mental events to a steam whistle that contributes nothing to the work of a locomotive. [ 70 ]

Alfred North Whitehead advocated a sophisticated form of panpsychism that has been called by David Ray Griffin panexperientialism. [ 71 ]

For Karl Popper (1902–1994), there are three aspects of the mind–body problem: the worlds of matter, of mind, and of the creations of the mind, such as mathematics. In his view, the third-world creations of the mind could be interpreted by the second-world mind and used to affect the first world of matter. An example might be radio: the interpretation of the third world (Maxwell's electromagnetic theory) by the second-world mind to suggest modifications of the external first world.

The body–mind problem is the question of whether and how our thought processes in World 2 are bound up with brain events in World 1. ...I would argue that the first and oldest of these attempted solutions is the only one that deserves to be taken seriously [namely]: World 2 and World 1 interact, so that when someone reads a book or listens to a lecture, brain events occur that act upon the World 2 of the reader's or listener's thoughts; and conversely, when a mathematician follows a proof, his World 2 acts upon his brain and thus upon World 1. This, then, is the thesis of body–mind interaction. [ 72 ]

With his 1949 book, The Concept of Mind, Gilbert Ryle "was seen to have put the final nail in the coffin of Cartesian dualism". [ 73 ] In the chapter "Descartes' Myth," Ryle introduces "the dogma of the Ghost in the machine" to describe the philosophical concept of the mind as an entity separate from the body:

I hope to prove that it is entirely false, and false not in detail but in principle. It is not merely an assemblage of particular mistakes. It is one big mistake and a mistake of a special kind. It is, namely, a category mistake.

For John Searle (b. 1932), the mind–body problem is a false dichotomy; that is, mind is a perfectly ordinary aspect of the brain. Searle proposed biological naturalism in 1980.

According to Searle then, there is no more a mind–body problem than there is a macro–micro economics problem. They are different levels of description of the same set of phenomena. [...] But Searle is careful to maintain that the mental – the domain of qualitative experience and understanding – is autonomous and has no counterpart on the microlevel; any redescription of these macroscopic features amounts to a kind of evisceration, ... [ 74 ]

Aristotle seems to say that the nous is a form, but on closer inspection we find that it is not, or at least not the usual kind: nous is a maker of forms.
A "form of forms" is like a tool of tools, like a living body's organ that makes tools. Nous is certainly not itself the sort of form that it makes. Likewise, the hand is not a made tool (it would have to be made by yet another hand). In Greek, "tool" and "organ" are the same word, so in the phrase "tool of tools" the first use of the word stands for a living organ, the second for an artificially made tool. In II-4, Aristotle says that "all natural bodies are tools (organs) of the soul" (both as food and as material from which to make tools). In English, we would say that the hand is the organ of tools.
https://en.wikipedia.org/wiki/Mind–body_problem
The Mine Safety and Health Administration (MSHA) ( /ˈɛmʃə/ ) is a small agency of the United States Department of Labor which administers the provisions of the Federal Mine Safety and Health Act of 1977 (Mine Act) to enforce compliance with mandatory safety and health standards as a means to eliminate fatal accidents, to reduce the frequency and severity of nonfatal accidents, to minimize health hazards, and to promote improved safety and health conditions in the nation's mines. [ 3 ] MSHA carries out the mandates of the Mine Act at all mining and mineral processing operations in the United States, regardless of size, number of employees, commodity mined, or method of extraction. David Zatezalo was sworn in as Assistant Secretary of Labor for Mine Safety and Health, and head of MSHA, on November 30, 2017. He served until January 20, 2021. Jeannette Galanais was appointed Acting Assistant Secretary by President Joe Biden on February 1, 2021 and served until Christopher Williamson took office on April 11, 2022. [ 4 ]

MSHA is organized into several divisions. [ 5 ] The Coal Mine Safety and Health division is divided into 12 districts covering coal mining in different portions of the United States. The Metal-Nonmetal Mine Safety and Health division covers six regions of the United States.

In 1891, Congress passed the first federal statute governing mine safety. The 1891 law was relatively modest legislation that applied only to mines in U.S. territories, and, among other things, established minimum ventilation requirements at underground coal mines and prohibited operators from employing children under 12 years of age. [ 6 ] In 1910, Congress established the Bureau of Mines as a new agency in the Department of the Interior. [ 7 ] The Bureau was charged with the responsibility to conduct research and to reduce accidents in the coal mining industry, but was given no inspection authority until 1941, when Congress empowered federal inspectors to enter mines. [ 8 ] In 1947, Congress authorized the formulation of the first code of federal regulations for mine safety. [ 9 ]

The Federal Coal Mine Safety Act of 1952 provided for annual inspections in certain underground coal mines, and gave the Bureau limited enforcement authority, including power to issue violation notices and imminent danger withdrawal orders. [ 10 ] The 1952 Act also authorized the assessment of civil penalties against mine operators for noncompliance with withdrawal orders or for refusing to give inspectors access to mine property, although no provision was made for monetary penalties for noncompliance with the safety provisions. In 1966, Congress extended coverage of the 1952 Coal Act to all underground coal mines. [ 11 ] The first federal statute directly regulating non-coal mines did not appear until the passage of the Federal Metal and Nonmetallic Mine Safety Act of 1966. [ 12 ] The 1966 Act provided for the promulgation of standards, many of which were advisory, and for inspections and investigations; however, its enforcement authority was minimal.

The Federal Coal Mine Health and Safety Act of 1969, generally referred to as the Coal Act, was more comprehensive and more stringent than any previous federal legislation governing the mining industry. [ 13 ] The Coal Act included surface as well as underground coal mines within its scope, required two annual inspections of every surface coal mine and four at every underground coal mine, and dramatically increased federal enforcement powers in coal mines.
The Coal Act also required monetary penalties for all violations, and established criminal penalties for knowing and willful violations. The safety standards for all coal mines were strengthened, and health standards were adopted. The Coal Act included specific procedures for the development of improved mandatory health and safety standards, and provided compensation for miners who were totally and permanently disabled by the progressive respiratory disease caused by the inhalation of fine coal dust: pneumoconiosis, or "black lung".

In 1973, the Secretary of the Interior, through administrative action, created the Mining Enforcement and Safety Administration (MESA) as a new departmental agency separate from the Bureau of Mines. MESA assumed the safety and health enforcement functions formerly carried out by the Bureau, to avoid any appearance of a conflict of interest between the enforcement of mine safety and health standards and the Bureau's responsibilities for mineral resource development. (MESA was the predecessor organization to MSHA, prior to March 9, 1978.)

More recently, Congress passed the Federal Mine Safety and Health Act of 1977, the legislation which currently governs MSHA's activities. [ 14 ] The Mine Act amended the 1969 Coal Act in a number of significant ways, and consolidated all federal health and safety regulation of the mining industry, coal as well as non-coal mining, under a single statutory scheme. The Mine Act strengthened and expanded the rights of miners, and enhanced the protection of miners from retaliation for exercising such rights. Mining fatalities dropped sharply under the Mine Act, from 272 in 1977 to 45 in 2014. [ 15 ] The Mine Act also transferred responsibility for carrying out its mandates from the Department of the Interior to the Department of Labor, and created MSHA. [ 16 ] : 12 Additionally, the Mine Act established the independent Federal Mine Safety and Health Review Commission to provide for independent review of the majority of MSHA's enforcement actions.

Congress passed the Mine Improvement and New Emergency Response Act (MINER Act) in 2006. [ 17 ] The MINER Act amended the Mine Act to require mine-specific emergency response plans in underground coal mines; added new regulations regarding mine rescue teams and sealing of abandoned areas; required prompt notification of mine accidents; and enhanced civil penalties. [ 18 ] [ 19 ]

MSHA inherited regulatory authority over respirators from the Bureau of Mines, sharing it with NIOSH until the passage of 42 CFR Part 84 withdrew MSHA from the respirator approval process. [ 20 ] Modern mining regulation in the United States is carried out by MSHA and governed by the Federal Mine Safety and Health Act of 1977 and MSHA's Program Policy Manual Volume III.

On January 27, 2012, as required under the Dodd–Frank Wall Street Reform and Consumer Protection Act, the Securities and Exchange Commission adopted final rules which require covered SEC-reporting issuers that are "operators" (or that have a subsidiary that is an "operator") of a "coal or other mine" to disclose certain mine safety violations, citations and orders and related matters for each coal or other mine that they operate. [ 21 ] Covered mine "operators" are also required to file a current report on Form 8-K to disclose the receipt of certain orders and notices from the U.S.
Labor Department's Mine Safety and Health Administration (MSHA) related to a coal or other mine that they operate.

Mine operators are required by law to report all mining accidents within 15 minutes of when the operator knew or should have known about the accident, [ 22 ] and certain categories of accidents and injuries are immediately reportable.

Statistical analyses performed by MSHA show that between 1990 and 2004, the industry cut the rate of injuries (a measure comparing the rate of incidents to overall number of employees or hours worked) by more than half and fatalities by two-thirds, following three prior decades of steady improvement. [ 24 ] MSHA employs nearly one safety inspector for every four coal mines. Underground coal mines are thoroughly inspected at least four times annually by MSHA inspectors. In addition, miners can report violations and request additional inspections; miners who do so cannot be penalized or threatened with loss of employment.

Additionally, the Mine Safety and Health Act authorizes the National Institute for Occupational Safety and Health (NIOSH), part of the Centers for Disease Control and Prevention under the U.S. Department of Health and Human Services, to develop recommendations for mine health standards for the Mine Safety and Health Administration; administer a medical surveillance program for miners, including chest X-rays to detect pneumoconiosis (black lung disease) in coal miners; conduct on-site investigations in mines; and test and certify personal protective equipment and hazard-measurement instruments. [ 25 ]
https://en.wikipedia.org/wiki/Mine_Safety_and_Health_Administration
A mine railway (or mine railroad, U.S.), sometimes pit railway, is a railway constructed to carry materials and workers in and out of a mine. [ 1 ] Materials transported typically include ore, coal and overburden (also called variously spoils, waste, slack, culm, [ 2 ] and tailings; all meaning waste rock). It is little remembered, but the mix of heavy and bulky materials that had to be hauled into and out of mines gave rise to the first several generations of railways: at first wooden rails, eventually adding protective iron, steam locomotion by fixed engines, and the earliest commercial steam locomotives, all in and around the workings of mines. [ 3 ]

Wagonways (or tramways) were developed in Germany in the 1550s to facilitate the transport of ore tubs to and from mines, using primitive wooden rails. Such an operation was illustrated in 1556 by Georgius Agricola of Germany. [ 4 ] This used "Hund" carts with unflanged wheels running on wooden planks, and a vertical pin on the truck fitting into the gap between the planks to keep it going the right way. [ 5 ] Such a transport system was used by German miners at Caldbeck, Cumbria, England, perhaps from the 1560s. [ 6 ] An alternative explanation derives the name from the Magyar hintó – a carriage. There are possible references to their use in central Europe in the 15th century. [ 7 ]

A funicular railway was made at Broseley in Shropshire, England at some time before 1605. This carried coal for James Clifford from his mines down to the River Severn to be loaded onto barges and carried to riverside towns. [ 8 ] Though the first documentary record of this is later, its construction probably preceded the Wollaton Wagonway, completed in 1604, hitherto regarded as the earliest British installation. This ran from Strelley to Wollaton near Nottingham. Huntingdon Beaumont, who was concerned with mining at Strelley, also laid down broad wooden rails near Newcastle upon Tyne, on which a single horse could haul fifty to sixty bushels of coal. [ 9 ]

By the 18th century, such wagonways and tramways existed in a number of areas. Ralph Allen, for example, constructed a tramway to transport stone from a local quarry to supply the needs of the builders of the Georgian terraces of Bath. The Battle of Prestonpans, in the Jacobite rising of 1745, was fought astride the 1722 Tranent–Cockenzie Waggonway. [ 10 ] This type of transport spread rapidly through the whole Tyneside coalfield, and the greatest number of lines were to be found in the coalfield near Newcastle upon Tyne. They were mostly used to transport coal in chaldron wagons from the coalpits to a staithe (a wooden pier) on the river bank, whence coal could be shipped to London by collier brigs. The wagonways were engineered so that trains of coal wagons could descend to the staithe by gravity, being braked by a brakesman who would "sprag" the wheels by jamming them. Wagonways on less steep gradients could be retarded by allowing the wheels to bind on curves. As the work became more wearing on the horses, a vehicle known as a dandy wagon was introduced, in which the horse could rest on downhill stretches.
A tendency to concentrate employees began when Benjamin Huntsman, looking for higher-quality clock springs, found in 1740 [ 11 ] that he could produce high-quality steel in unprecedented quantities (crucible steel, replacing blister steel) by using ceramic crucibles in the same reverberatory furnaces, born of fuel shortages and the glass industry, that were spurring coal mining, coking, cast-iron cannon foundries, and the much-in-demand gateway or stimulus products [ 11 ] of the glass-making industries. These technologies had for several decades been gradually quickening industrial growth and causing early concentrations of workers, so that occasional small factories came into being. [ 11 ] This concentration of effort into larger, centrally located enterprises [ 11 ] became a trend spurred by Henry Cort's iron-processing patent of 1784, [ 11 ] leading in short order to foundries collocating near coal mines [ 3 ] and accelerating the supplanting of the nation's cottage industries. [ 11 ] With that concentration of employees and separation from dwellings, [ 3 ] horse-drawn trams became commonly available for the daily commute to work. [ 3 ]

Mine railways were used from 1804 around Coalbrookdale in such industrial concentrations of mines and iron works, all demanding traction-drawing of bulky or heavy loads. These gave rise to extensive early wooden railways and initial animal-powered trains of vehicles, [ 11 ] then successively, in just two decades, [ 3 ] to protective iron strips nailed onto the rails, to steam-drawn trains (1804), and to cast-iron rails. Later, George Stephenson, inventor of the world-famous Rocket and a board member of a mine, convinced his board to use steam for traction. [ 12 ] Next, he petitioned Parliament to license a public passenger railway, [ 3 ] founding the Liverpool and Manchester Railway. Amid the intense publicity generated in part by the contest to find the best locomotive, won by Stephenson's Rocket, railways underwent explosive growth worldwide, and the industrial revolution gradually went global. [ 3 ]

There is usually no direct connection from a mine railway to the mine's industrial siding or the public railway network, because of the narrow-gauge track that is normally employed. In the United States, the standard gauge for mine haulage is 3 ft 6 in (1,067 mm), although gauges from 18 in (457 mm) to 5 ft 6 in (1,676 mm) are used. [ 13 ] [ 14 ]

Original mine railways used wax-impregnated wooden rails attached to wooden sleepers, on which drams were dragged by men, children or animals. These were later replaced by L-shaped iron rails attached to the mine floor, meaning that no sleepers were required, and hence leaving easy access for the feet of the children or animals propelling the drams. These early wooden rails, which in the early industrial revolution around Coalbrookdale were soon capped with iron strapping, were replaced by cast-iron rails and then, with the first steam traction engines, by wrought-iron rails, [ 12 ] and eventually by steel rails, as each in succession was found to last much longer than the previous, cheaper rail type. [ 3 ] By the time of the first steam locomotive-drawn trains, most rails laid were of wrought iron, [ 3 ] which outlasted cast-iron rails by 8:1. About three decades later, after Andrew Carnegie had made steel competitively cheap, steel rails supplanted iron for the same longevity reasons.
[ 3 ] The tram (or dram) cars used for mine haulage are generally called tubs. [ 15 ] The term mine car is commonly used in the United States. [ 16 ]

Mine workers have often been used to push mine carts. In the very cramped conditions of hand-hewn mining tunnels, children were also often used before the advent of child labour legislation, either pushing the carts themselves or tending to the animals that did (see below). [ 17 ]

The Romans were the first to realise the benefits of using animals in their industrial workings, using specially bred pit ponies to power supplementary work such as mine pumps. Ponies began to be used underground, often replacing child or female labour, as distances from pit head to coal face became greater. The first known recorded use in Britain was in the County Durham coalfield in 1750; in the United States, mules were the dominant source of animal power in the mine industry, with horses and ponies used to a lesser extent. [ 18 ] At the peak in 1913, there were 70,000 ponies underground in Britain. In later years, mechanical haulage was quickly introduced on the main underground roads, replacing the pony hauls, and ponies tended to be confined to the shorter runs from coal face to main road (known in North East England as "putting", in the United States as "tramming" or "gathering" [ 19 ]) which were more difficult to mechanise. As of 1984, 55 ponies were still in use with the National Coal Board in Britain, chiefly at the modern pit in Ellington, Northumberland. Dandy wagons were often attached to trains of full drams to carry a horse or pony: mining and later railway engineers designed their tramways so that full (heavy) trains would descend the slope by gravity, while horses would pull the empty drams back to the workings, and the dandy wagon allowed easy transportation of the required horse each time. Probably the last colliery horse to work underground in a British coal mine, Robbie, was retired from Pant y Gasseg, near Pontypool, in May 1999. [ 20 ]

In the 19th century, after the mid-1840s, when the German invention of wire rope became available from manufactories in both Europe and North America, large stationary steam engines on the surface with cables reaching underground were commonly used for mine haulage. The innovation-minded managers of the Lehigh Coal & Navigation Company pioneered the technology in America, using it to allow the dead-lift of loaded coal consists 1,100 feet (340 m) up the Ashley Planes and to augment their works in and above the Panther Creek Valley [ 21 ] with new gravity switchback sections and return cable inclines. Most notably, they installed two cable lift sections and expanded the already famous Mauch Chunk Switchback Railway with a "back track", dropping car return time from 3–4 hours to about 20 minutes; the new inclines were then fed from new mine shafts and coal breakers farther down in the valley. [ 22 ] Sometimes, stationary engines were even located underground, with the boiler on the surface, though that was a minority situation. All of the cable haulage methods were primarily used on the main haulage ways of the mine. Typically, manual labor, mules or pit ponies were used to gather filled cars from the working areas (galleries were driven across seams as much as possible) to the main haulage ways.
[ 23 ] In the first decade of the 20th century, electric locomotives displaced animal power for this secondary haulage role in mines where spark-triggered explosive methane buildup was a lesser danger. [ 24 ]

Several cable haulage systems were used. In slope mines, where there was a continuous downgrade from the entrance to the working face, the rope from the hoisting engine could be used to lower empty cars into the mine and then raise full cars. In shaft mines, secondary hoisting engines could be used to pull cars on grades within the mine. For grades of a few percent, trains of 25 cars each carrying roughly half a ton were typical in the 1880s. [ 25 ] In mines where grades were not uniform or where the grades were not steep enough for gravity to pull a train into the mine, the main hoisting rope could be augmented with a tail rope connected to the opposite end of the train of mine cars. The tail-rope system had its origins on cable-hauled surface inclines prior to the 1830s, [ 26 ] and was the dominant system in the 1880s. [ 27 ] Frequently, one engine was used to work both ropes, with the tail rope reaching into the mine, around a pulley at the far end, and then out again.

Finally, the most advanced systems involved continuous loops of rope operated like a cable car system. Some mines used endless chains before wire rope became widely available. [ 28 ] The endless chain system originated in the mines near Burnley (England) around 1845. An endless rope system was developed in Nottinghamshire around 1864, and another was independently developed near Wigan (also in England) somewhat later. [ 29 ] In these systems, individual cars or trains within the mine could be connected to the cable by a grip comparable to the grips used on surface cable car systems. [ 30 ] In some mines, the haulage chain or cable went over the top of the cars, and cars were released automatically when the chain or cable was lifted away by an overhead pulley. Where the cable ran under the cars, a handheld grip could be used; the grip operator would ride on the front car of the train, working a grip chained to the front of the car. In some cases, a separate grip car was coupled to the head of the train. [ 31 ] At the dawn of the 20th century, endless rope haulage was the dominant haulage technology for the main haulage ways of underground mines. [ 24 ]

For as long as it was economical to operate steam locomotives on the general railway system, steam locomotives were also used on the surface trackage of mines. In the 19th and early 20th centuries, some large mines routinely used steam locomotives underground. Locomotives for this purpose were typically very squat tank engines with an 0-4-0 wheel arrangement. Use of steam power underground was only practical in areas with very high exhaust airflow, with engine speeds limited to half the air velocity to assure adequate clean air for the crew on outbound trips. Such engines could not be used in mines with firedamp problems. [ 32 ]

Porter, Bell & Co. appears to have built the first underground mining locomotives used in the United States, around 1870. By 1874, the Consolidation Coal Company and Georges Creek Coal and Iron Company were using several Porter locomotives in their underground mines in the Georges Creek Valley of Maryland. Other users included several coal mines near Pittsburgh, Pennsylvania, the Lehigh Coal and Navigation Company and an iron mine in the Lake Superior Iron Ranges.
Porter's mine locomotives required a minimum 5-foot clearance and 4-foot width when operating on 3-foot gauge track, where they could handle a 20-foot radius curve. [ 33 ] [ 34 ] The Baldwin Locomotive Works built similar locomotives, starting in 1870. [ 35 ] [ 36 ] By the early 20th century, very small British-made oil-fired steam locomotives were in use in some South African mines. [ 37 ] Porter and Vulcan (Wilkes-Barre) advertised steam mine locomotives in 1909 and 1911. [ 38 ] [ 39 ] By the early 1920s, only a few small mines in the Pocahontas Coalfield in West Virginia were using steam locomotives underground. [ 40 ] Nonetheless, both Baldwin and Vulcan continued to advertise steam locomotives for underground use outside the coal industry as late as 1921. [ 41 ]

Compressed-air locomotives were powered by compressed air carried on the locomotive in compressed-air containers. This method of propulsion had the advantage of being safe, but the disadvantage of high operating costs due to the very limited range before it was necessary to recharge the air tanks. Generally, compressors on the surface were connected by plumbing to recharge stations located throughout the mine. Recharging was generally very fast. Narrow gauge compressed-air locomotives were manufactured for mines in Germany as early as 1875, with tanks pressurized to 4 or 5 bar. [ 42 ] The Baldwin Locomotive Works delivered their first compressed-air locomotive in 1877, and by 1904, they offered a variety of models, most with an 0-4-0 wheel arrangement. [ 43 ] Compressed-air locomotives were introduced in the Newbottle Collieries in Scotland in 1878, operating at 200 psi (14 bar). [ 44 ] Ordinary mine compressed-air systems operating at 100 psi (7 bar) only allowed a few hundred feet of travel. By the late 1880s, Porter was building locomotives designed for 500 to 600 psi (34–41 bar). [ 45 ] By the early 1900s, locomotive air tank pressures had increased to 600 to 800 psi (41–55 bar), although pressures up to 2000 psi (140 bar) were already envisioned. [ 43 ] In 1911, Vulcan (Wilkes-Barre) was selling single-tank compressed-air locomotives operating at 800 psi (55 bar), double-tank models up to 1000 psi (69 bar) and one 6-tank model that may have operated at a much higher pressure. [ 46 ] The Homestake Mine in South Dakota, USA, used such high pressures, with special compressors and distribution piping. Except for very small prospects and remote small mines, battery or diesel locomotives have replaced compressed air.

The electric motor technology in use before 1900 (DC at a few hundred volts, with power supplied directly to the motor from the overhead wire) enabled the use of efficient, small and sturdy locomotives of simple construction. Initially, there was no voltage standard, but by 1914, 250 volts was the standard voltage for underground work in the United States. This relatively low voltage was adopted for safety's sake. [ 47 ]

The first electric mine railway in the world was developed by Siemens & Halske for bituminous coal mining in Saxon Zauckerode near Dresden (now Freital) and was being worked as early as 1882 on the 5th main cross-passage of the Oppel Shaft run by the Royal Saxon Coal Works. [ 48 ] In 1894, the mine railway of the Aachen smelting company Rothe Erde was electrically driven, as were subsequently numerous other mine railways in the Rhineland, Saarland, Lorraine, Luxembourg and Belgian Wallonia.
There were large scale deliveries of electric locomotives for these railways from AEG , Siemens & Halske , the Siemens-Schuckert Works (SSW) and the Union Electricitäts-Gesellschaft (UEG) in these countries. The first electric mine locomotive in the United States went into service in mid-1887 in the Lykens Valley Coal Company mine in Lykens, Pennsylvania . The 35 hp motor for this locomotive was built by the Union Electric Company of Philadelphia . [ 49 ] The 15,000-pound (6,800 kg) locomotive was named the Pioneer, and by mid-1888, a second electric locomotive was in service at that mine. [ 50 ] [ 51 ] [ 52 ] Use in the Appalachian coal fields spread rapidly. By 1903, there were over 600 electric mine locomotives in use in America, with new ones being produced at a rate of 100 per year. [ 53 ] Initially, electric locomotives were used only where it was economical to string overhead line for power. This limited their usage for gathering loads at the mine face, where trackage was temporary and frequently relocated. This motivated the development of battery locomotives, but in the first decade of the 20th century the first successful electric gathering locomotives used cable reels . To run on tracks away from overhead lines, the power cable was clipped to the overhead line and then automatically unreeled as the locomotive advanced and reeled up as the locomotive returned. [ 54 ] [ 55 ] [ 56 ] Crab locomotives were equipped with a winch for pulling cars out of unpowered tracks. This approach allowed the use of temporary track that was too light to carry the weight of a cable-reel or battery locomotive. The disadvantage of a crab locomotive was that someone had to pull the haulage cable from the winch to the working face, threading it over pulleys at any sharp turns. [ 57 ] [ 58 ] Explosion-proof mining locomotives from Schalker Eisenhütte are used in all the mines owned by Ruhrkohle (today Deutsche Steinkohle ). The Gasmotorenfabrik Deutz (Deutz Gas Engine Company), now Deutz AG , introduced a single-cylinder benzine locomotive for use in mines in 1897. Their first mining locomotives were rated at 6 to 8 hp (4.5 to 6.0 kW) and weighed 5,280 pounds (2,390 kg). [ 59 ] The original 6 hp (4.5 kW) engine was 8 feet 6.5 inches (2.60 m) long, 3 feet 11 inches (1.19 m) wide and 4 feet 3.5 inches (1.31 m) high and weighed 2.2 long tons (2.46 short tons; 2.24 t). [ 60 ] Typical Deutz mine engines in 1906 were rated at 8 to 12 hp (6.0 to 8.9 kW). [ 61 ] By this time, double-cylinder 18 hp (13 kW) engines built by Wolseley Motors were being used in South African mines. [ 62 ] By 1914, Whitcomb Locomotive Works , Vulcan Iron Works , and the Milwaukee Locomotive Manufacturing Co. (later merged with Whitcomb) were making gasoline mining locomotives in the United States with 4- and 6-cylinder engines . [ 63 ] Late 19th and early 20th century mine railway locomotives were operated on petrol, benzene, and alcohol / benzene mixtures. [ 64 ] Although such engines were initially used in metal mines, they were in routine use in coal mines by 1910. Firedamp safety was achieved by wire gauze shields over intake and exhaust ports as well as cooling water injection in the exhaust system. Bubbling the exhaust through a water bath also greatly reduced noxious fumes. [ 63 ] [ 65 ] For safety (noxious fumes as well as flammability of the fuel), modern mine railway internal combustion locomotives are operated only on diesel fuel. Catalytic scrubbers reduce carbon monoxide.
Other locomotives are electric, either battery or trolley. Battery-powered locomotives and systems solved many of the potential problems that combustion engines present, especially regarding fumes, ventilation and heat generation. Compared to simple electric locomotives, battery locomotives do not need trolley wire strung over each track. However, batteries are heavy items which used to require long periods of charge to produce relatively short periods of full-power operation, resulting in either restricted operations or the need to double up on equipment purchases. In the 19th century, there was considerable speculation about the potential use of battery locomotives in mines. [ 66 ] [ 67 ] [ 68 ] By 1899, Baldwin-Westinghouse had delivered an experimental battery locomotive to a Virginia mine; battery recharging occurred whenever the locomotive was running under trolley wire , while it could run from battery when working on temporary trackage near the face . This locomotive was eventually successful, but only after the voltage on the trolley system was stabilized. [ 69 ] A Siemens & Halske pure storage-battery locomotive was in use in a coal mine in Gelsenkirchen (Germany) by 1904. [ 70 ] One problem with battery locomotives was battery replacement. This was simplified by the use of removable battery boxes. Eventually, battery boxes were developed that included wheels so that they could be rolled off the locomotive. [ 71 ] While the initial motivation had to do with battery maintenance, the primary use for this idea was at charging stations, where a discharged battery box could be rolled off and replaced with a freshly charged box. [ 72 ] While popular, battery systems were often practically restricted to mines with short haulage runs, moving relatively low-density material that could ignite easily. Today, heavy-duty batteries provide full-shift (8 hours) operations with one or more spare batteries charging. Until 1995 the largest single narrow gauge, above-ground mine and coal railway network in Europe was in the Leipzig-Altenburg lignite field in Germany. It had 726 kilometres (451 mi) of 900 mm ( 2 ft 11 + 7 ⁄ 16 in ) track – the largest 900 mm network in existence. Of this, about 215 kilometres was removable track inside the actual pits and 511 kilometres was fixed track for the transportation of coal to the main rail network. The last 900 mm ( 2 ft 11 + 7 ⁄ 16 in ) gauge mine railway in the German state of Saxony , a major mining area in central Europe, was closed in 1999 at the Zwenkau Mine near Leipzig. Once a very extensive railway network, towards the end it had only 70 kilometres (43 mi) of movable track and 90 kilometres (56 mi) of fixed 900 mm railway track within the Zwenkau open cast mine site itself, as well as a 20 kilometres (12 mi) standard gauge link railway for the coal trains to the power stations (1995–1999). The closure of this mine marked the end of the history of 900 mm mine railways in the lignite mines of Saxony. In December 1999, the last 900 mm railway in the Central German coal mining field, in Lusatia, was closed. In the United States, Consol Energy 's Shoemaker Mine, covering a large area east of Benwood, West Virginia , was the last underground coal mine to use rail haulage. Starting in 2006, 12 miles of underground conveyor belt and 2.5 miles of above-ground conveyor belt were installed.
The last load of coal was hauled by rail in January 2010. [ 73 ] A remnant of the coal railways in the Leipzig-Altenburg Lignite Field is preserved as a museum railway and may be visited. Regular museum trains also run on the line from Meuselwitz via Haselbach to Regis-Breitingen .
https://en.wikipedia.org/wiki/Mine_railway
Mine surveying is the practice of determining the relative positions of points on or beneath the surface of the earth by direct or indirect measurements of distance , direction and elevation . [ 1 ]
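The core reduction in mine surveying — turning a measured distance, direction and elevation difference into coordinates — can be illustrated with a short sketch. The following Python fragment is a minimal illustration only; the function name and the sample numbers are invented for the example and are not drawn from any surveying standard:

```python
import math

def traverse_leg(x, y, z, slope_dist, bearing_deg, vert_angle_deg):
    """Advance one survey station to the next.

    slope_dist: distance measured along the line of sight (m)
    bearing_deg: horizontal direction, clockwise from grid north
    vert_angle_deg: inclination above (+) or below (-) the horizontal
    """
    horiz = slope_dist * math.cos(math.radians(vert_angle_deg))  # horizontal distance
    dz = slope_dist * math.sin(math.radians(vert_angle_deg))     # elevation change
    dx = horiz * math.sin(math.radians(bearing_deg))             # easting component
    dy = horiz * math.cos(math.radians(bearing_deg))             # northing component
    return x + dx, y + dy, z + dz

# One leg: 100 m sighted down a 5-degree decline on a bearing of 045
print(traverse_leg(0.0, 0.0, 0.0, 100.0, 45.0, -5.0))
```

A real underground traverse chains many such legs together, carrying coordinates forward from known surface control points.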
https://en.wikipedia.org/wiki/Mine_survey
A minecart , mine cart , or mine car (or more rarely mine trolley or mine hutch ), is a type of rolling stock found on a mine railway , used for transporting ore and materials procured in the process of traditional mining . Minecarts are seldom used in modern operations, having largely been superseded in underground operations (especially coal mines ) by more efficient belt conveyor systems that allow machines such as longwall shearers and continuous miners to operate at their full capacity, and above ground by large dumpers . Throughout the world, there are different titles for mine carts. In South Africa , a minecart is referred to as a cocopan [ 1 ] or koekepan . In German , it is called a Hunt (alternative spelling Hund ). In Wales , minecarts are known as drams . [ 2 ] In the U.S. and elsewhere, the term skip – or skip wagon (older spelling: waggon ) – is used. [ 3 ] (See: Skip (container)#Etymology ) In particular, a V skip wagon is a side-tipping skip with a V-shaped body. Minecarts range in size and usage, and are usually made of steel for hauling ore. Shaped like large, rectangular buckets, minecarts ride on metal tracks and were originally pushed or pulled by men and animals (supplemented later by rope-haulage systems). They were generally introduced in the early modern period, replacing containers carried by men. Originally, they did not run on true rails, where the wheels would have a flange to fit the track, but on plain wheels over a wooden plank way, held on track by a pin fitting into a guide groove, or by the underside of the cart itself, which was lower than the wheels and fitted between the planks (the "Hungarian system"). [ 4 ] As mines increased in size and output, the aforementioned methods became impractical because of the distances and quantities of material involved, so larger carts were used, hauled by narrow gauge diesel and electric locomotives (in coal mining operations, where flammable gas would present a problem, the locomotives would be flameproof or battery powered ). These were also used to pull trains transporting miners to the workfaces. Minecarts were very important in the history of technology because they evolved into railroad cars . See History of rail transport . An open railroad car (gondola) with a tipping trough, often found in mines, is known in the UK as a tippler or chaldron wagon [ 5 ] and in the US as a mine car . [ 6 ] Minecarts have been depicted as a type of thrill ride; for instance, Indiana Jones uses one in an escape scene in Indiana Jones and the Temple of Doom . Mine train roller coasters are inspired by minecarts. Minecart levels, a term used for levels in which the player takes a high-speed ride in a minecart, are common in video games , especially side-scrolling video games such as Donkey Kong Country [ 7 ] and Fantastic Dizzy . A minecart is also featured in two scenes of the 2005 animated film Hoodwinked! . Minecarts and tracks can be crafted by the player in Minecraft and used for transportation. They are also found in abandoned mineshafts that generate naturally as part of the game's procedural generation . Minecarts in Stardew Valley are also used for transportation between the farm, mines and a few other destinations, but are not available from the beginning and are unlocked by completing one part of the community center. In Sun Haven , a minecart is used by players to traverse between floors in the mines, while other farm sims use lifts.
In Great Britain , restored mine carts (known as "tubs") containing floral displays can commonly be seen on village greens and outside pubs in former coal mining areas such as Northumberland and County Durham . As in Great Britain, old mine carts are common decorations in Germany, sometimes accompanied by old mining tools. Especially in the Ruhr area , such carts can be found in many front yards.
https://en.wikipedia.org/wiki/Minecart
Minentaucher is the German term for mine clearance divers . The Minentaucherkompanie is a specialist unit within the German Navy responsible for underwater and land tasks, including removing or salvaging underwater munitions such as mines and servicing underwater drones . [ 1 ] It is part of the Sea Battalion [ 2 ] and is based in Eckernförde . The mine clearance diver company consists of soldiers at its headquarters in Eckernförde and those assigned to various German navy vessels. It primarily operates in German territorial waters such as the Baltic Sea , clearing naval mines and other hazards. It also supports search and recovery operations involving sunken ships, submarines and airplanes. In autumn 1985 the unit saw its first overseas engagement, clearing freshly laid mines in the Suez Canal . It has since served in several parts of the world as part of NATO military deployments and exercises. Members of the company have also deployed with German special forces on various missions. The unit is currently equipped with the Stealth EOD M for diving, as well as the LAR VII for shallow water operations.
https://en.wikipedia.org/wiki/Minentaucher
The Mineral Products Association ( MPA ) is the United Kingdom trade association for the aggregates , asphalt , cement , concrete , dimension stone, lime , mortar , and industrial sand industries. The MPA, with the affiliation of the British Association of Reinforcement, British Calcium Carbonates Federation, Eurobitume UK, and United Kingdom Quality Ash Association, has a growing membership of 520 companies and is the sectoral voice for mineral products. [ 1 ] MPA membership is made up of the vast majority of independent SME quarrying companies throughout the UK, as well as the nine major international and global companies. It covers 100% of UK cement production, 90% of aggregates production, 95% of asphalt and over 70% of ready-mixed concrete and precast concrete production. Each year the industry supplies £22 billion worth of materials and services to the UK economy and is the largest supplier to the construction industry, which has annual output valued at £144 billion. Industry production represents the largest materials flow in the UK economy and is also one of the largest manufacturing sectors. [ 2 ] The MPA was formed in March 2009 from the merger of the Quarry Products Association, the British Cement Association and The Concrete Centre. It was officially launched in June 2009. [ 3 ] The MPA has offices in London, Glasgow and Fron in Wales. QPA Northern Ireland is affiliated to the MPA and has offices in Crumlin, County Antrim . [ 4 ] The British Precast Concrete Federation (BPCF), the trade association of precast concrete manufacturers, is a member of the MPA and is based in Leicester. [ 5 ] The MPA has regional divisions, for London & South East, South West, East Anglia, Midlands, Wales, North, Scotland and Northern Ireland. The Concrete Centre was formed in 2003 and since 2009 has been part of the MPA. The Concrete Centre promotes the use of concrete in construction through the provision of resources to enable designers to follow best practice for the design of concrete and masonry. The Concrete Centre publishes a journal, Concrete Quarterly which was first published by the Cement and Concrete Association in 1947. The journal showcases the use of concrete in construction projects in the United Kingdom and worldwide. The CQ archive is available online. [ 6 ]
https://en.wikipedia.org/wiki/Mineral_Products_Association
Mineral economics is the academic discipline that investigates and promotes understanding of economic and policy issues associated with the production and use of mineral commodities. [ 1 ] It is specially concerned with the analysis and understanding of mineral distribution as well as the 'discovery, exploitation, and marketing of minerals'. [ 2 ] Mineral economics constructs policies regarding mineral commodities and their global distribution, [ 3 ] and examines the success of the mining industry and its social and climatic implications for the economy. [ 4 ] It is a continuing, evolving field which originated after the Second World War and has continued to expand in today's climate. [ 4 ] The identification of mineral sectors, the total revenue they derive from specific commodities, and how this varies across countries is significant for global trade and productivity. [ 5 ] Australia is a leading exporter of several mineral commodities, which provide a substantial percentage of revenue within the Australian economy. [ 6 ] Other leading mineral-trading countries are similarly significant for understanding the sector and for framing clear policy parameters. Establishing such findings addresses concerns regarding societal support and sustainability. The sustainability of the mining industry is also a key focus: its direct impact on the environment must be monitored and the necessary parameters applied. [ 7 ] Mineral economics did not become an academic discipline until after the Second World War, with the majority of earlier research being completed in other disciplines and fields. [ 4 ] Nevertheless, mineral economics has continued to develop since the 1940s in response to the demand for mineral commodities and the global increase in trade. [ 3 ] From the late 1980s to the early 1990s, demand for mineral and metal products was minimal; with perceptions of 'low rates of economic growth' and 'declining metal intensity of use', the mineral economics sector was at risk of a 'long-term decline'. [ 3 ] During the 1990s, economic transition became increasingly relevant across the globe. [ 3 ] Proposals for foreign investment and trade, initially in response to the perceived 'long-term decline', promoted the demand for mineral resources and in doing so enhanced the sector's present-day revenue. [ 3 ] Sustainability concerning mineral economics was first introduced and discussed in 1993. [ 3 ] Sustainability within the mineral sector concerns the following criteria: commercial viability, consistency with social preferences for the environment, and acceptable social consequences. [ 3 ] Mineral economics is a discipline that concerns several countries globally. [ 8 ] Global parameters and perspectives are necessary to ensure impartial diversity across sectors regarding both trading and contribution. [ 8 ] The Mining Contribution Index WIDER (MCI-W) ranked the countries with the largest mining contribution in 2014. [ 9 ] Listed in descending order, the DRC, Chile, Australia, Mongolia and Papua New Guinea were the five countries with the largest mineral contribution globally.
[ 9 ] The distribution of such mineral commodities has a major effect on the economy internationally, often contributing to employment and generating income. [ 8 ] The global demand for mineral commodities has the potential to cause both positive and negative outcomes for society and the environment. [ 10 ] Implementing concise and fair access to mineral commodities was recommended by the Neighbourhood, Development and International Cooperation Instrument (NDICI) in 2021, although this recommendation has not yet been published. [ 10 ] Creating a more inclusive mineral economy has been suggested as a way to encourage greater sustainability, given the abundance and market value of such commodities. [ 10 ] Mineral resources are an increasingly valuable commodity within Australia's mining and mineral sector. [ 4 ] Australia's largest exports include 'coal, oil and gas, metals, non-metals and construction materials', and their mass distribution accounts for substantial revenue in the Australian economy. [ 6 ] Mineral economics has a major influence on government policies, which ultimately has systematic implications for the sector's overall success and performance. [ 11 ] The mineral economic sector has limiting factors despite its revenue, specifically for oil-producing nations regarding 'debt, deficits, inflation and an inefficient public sector '. [ 12 ] Consequently, global economic growth obliges the mineral sector to construct policies and procedures that predict both economic growth and depletion, and that remain socioeconomically viable. Such policies also alleviate the limiting factors previously mentioned, while providing the opportunity for trends and associated revenue to be predicted and analysed, which offers the potential for additional parameters to limit inflation and deficits within the sector. [ 13 ] The mineral sector is a major contributor to the Australian economy, specifically through its revenue: it contributes '8 per cent of Gross Domestic Product'. [ 6 ] Australia is identified as the largest global distributor of black coal, iron ore, alumina, lead and zinc. [ 6 ] Mineral commodities and their distribution not only provide profit to distributors but also offer socioeconomic support. [ 13 ] Australia's status as a leading distributor also promotes revenue in worldwide trade through exports and relations. [ 6 ] Despite this contribution, the mineral sector is 'capital intensive', relying heavily on machinery, and ultimately supplies only '2% of jobs' within the mining sector, limiting the broader employment benefit. [ 12 ] Foreign trade revenue also has contradictory elements, due to the foreign stakeholders within the mining industry and their affiliated revenue, which limits the overall economic value retained by Australia. [ 12 ] In today's climate, concerns are present regarding the sustainability of mineral resources. [ 5 ] While the mineral sector provides substantial income to the economies of several leading exporting countries, concerns have been established affecting the endurance of mineral exportation and its associated income. [ 5 ] The identification of such sustainability concerns in relation to different sectors has been heavily discussed in recent years.
[ 7 ] Aspects such as climate change, as well as the production and distribution of mineral commodities within the mining and mineral sector, have been determined as significant concerns for mineral economics. [ 7 ] The future of minerals and their integration within society relies heavily on mineral economics and the policies constructed. [ 14 ] The integration of sustainable energy supplementation raises concerns regarding the success and future of mineral usage; however, it is important to note that technological advancements cannot 'replace energy' entirely. [ 14 ] Despite current concerns about future mineral availability and an expected decline in minerals, an increase in the costs associated with mineral commodities is anticipated. [ 14 ] This heightens the necessity of implementing technologies and sustainable practices that ensure the longevity of mineral resources and sectors, through recycling mineral resources and constructing adequate policies reflective of both trade and exports. [ 14 ]
https://en.wikipedia.org/wiki/Mineral_economics
In inorganic chemistry , mineral hydration is a reaction which adds water to the crystal structure of a mineral , usually creating a new mineral, commonly called a hydrate . In geological terms, the process of mineral hydration is known as retrograde alteration and is a process occurring in retrograde metamorphism . It commonly accompanies metasomatism and is often a feature of wall rock alteration around ore bodies . Hydration of minerals occurs generally in concert with hydrothermal circulation , which may be driven by tectonic or igneous activity. There are two main ways in which minerals hydrate. One is conversion of an oxide to a double hydroxide , as with the hydration of calcium oxide —CaO—to calcium hydroxide —Ca(OH) 2 . The other is with the incorporation of water molecules directly into the crystalline structure of a new mineral, [ 1 ] as with the hydration of feldspars to clay minerals , garnet to chlorite , or kyanite to muscovite . [ citation needed ] Mineral hydration is also a process in the regolith that results in conversion of silicate minerals into clay minerals. [ citation needed ] Some mineral structures, for example, montmorillonite , are capable of including a variable amount of water without significant change to the mineral structure. [ citation needed ] Hydration is the mechanism by which hydraulic binders such as Portland cement develop strength. A hydraulic binder is a material that can set and harden submerged in water by forming insoluble products in a hydration reaction. The term hydraulicity or hydraulic activity is indicative of the chemical affinity of the hydration reaction. [ 2 ] Examples of hydrated minerals include gypsum (hydrated calcium sulfate), brucite (the hydration product of periclase), and the serpentine-group minerals (formed by hydration of olivine).
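As a worked instance of the oxide-to-hydroxide route described above, the slaking of lime can be written out explicitly; the enthalpy shown is approximate, derived from standard enthalpies of formation:

```latex
\mathrm{CaO_{(s)}} + \mathrm{H_2O_{(l)}} \longrightarrow \mathrm{Ca(OH)_{2(s)}},
\qquad \Delta H^{\circ} \approx -65~\mathrm{kJ\,mol^{-1}}
```

The strongly negative enthalpy is why slaking lime releases noticeable heat, and analogous exothermic hydration reactions are what allow hydraulic binders to set under water.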
https://en.wikipedia.org/wiki/Mineral_hydration
In metallurgy , mineral jigs are a type of gravity concentrator , separating materials with different densities . The jig is widely used in recovering valuable heavy minerals such as gold , platinum , tin , tungsten , as well as gemstones such as diamond and sapphire, from alluvial or placer deposits . Ores of metals such as iron and manganese , and industrial minerals such as barite , can also be recovered using jigs. The process begins with flowing a stream of liquid-suspended material over a screen and subjecting the screen to a vertical hydraulic pulsation. This pulsation momentarily expands or dilates the screen bed and allows the heavier materials to work toward the bottom. Heavier material finer than the screen openings will gradually work through the bed and the retention screen into the hutch, or lower compartment. That material, the concentrate, is discharged from this compartment or hutch through a spigot . If the concentrate is coarser than the screen, it will work down to the top of the shot bed, and can be withdrawn either continuously or intermittently. The lighter material, or tailing, will be rejected over the end of the jig. [ 1 ] The mineral jig has certain advantages in placer and hardrock mill flowsheets. In gold recovery, the jigs produce highly concentrated products which can be easily upgraded by methods such as barrel amalgamation, treating across shaking tables or processing through centrifugal concentrators. In other placer operations, the heavy minerals being sought are recovered efficiently and cheaply with similarly high ratios of concentration. In iron, manganese , and base metal treatment flowsheets, the jigs are operated to produce marketable grades of concentrate , or, as pre-concentration devices, to reject barren gangue before the ore enters the fine grinding section of the mill flowsheet. [ 2 ] The construction of the mineral jig results in maximum utilization of floor area and minimum head room requirements, permitting greater capacity per unit of operating floor area than, for example, shaking tables or other devices such as jig concentrators .
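The density effect that the jig exploits can be sketched with a settling-velocity comparison. The Python fragment below is a simplified illustration only: it assumes Stokes (laminar) settling, which holds just for fine particles at low Reynolds number, and the grain size and densities are chosen for the example rather than taken from any plant data:

```python
def stokes_settling_velocity(d, rho_particle, rho_fluid=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity in the Stokes (laminar) regime.

    d: particle diameter (m); rho_*: densities (kg/m^3);
    mu: fluid viscosity (Pa*s). Valid only for fine particles (Re << 1).
    """
    return (rho_particle - rho_fluid) * g * d**2 / (18.0 * mu)

# 50-micron gold vs quartz grains settling in water
for name, rho in [("gold", 19300.0), ("quartz", 2650.0)]:
    print(f"{name}: {stokes_settling_velocity(50e-6, rho) * 1000:.2f} mm/s")
```

Run, this shows the gold grain settling roughly an order of magnitude faster than the quartz grain of the same size, which is why repeated pulsation strata-fies the bed with the heavy mineral at the bottom.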
https://en.wikipedia.org/wiki/Mineral_jig
Mineral oil is any of various colorless, odorless, light mixtures of higher alkanes from a mineral source, particularly a distillate of petroleum , [ 1 ] as distinct from usually edible vegetable oils . The name 'mineral oil' by itself is imprecise, having been used for many specific oils since 1771. Other names, similarly imprecise, include 'white oil', 'paraffin oil', ' liquid paraffin ' (a highly refined medical grade ), paraffinum liquidum ( Latin ), and 'liquid petroleum'. Most often, mineral oil is a liquid obtained from refining crude oil to make gasoline and other petroleum products . Mineral oils used for lubrication are known specifically as base oils . More generally, mineral oil is a transparent , colorless oil, composed mainly of alkanes [ 2 ] and cycloalkanes , related to petroleum jelly . It has a density of around 0.8–0.87 g/cm 3 (0.029–0.031 lb/cu in). [ 3 ] Some of the imprecision in the definition of the names used for mineral oil (such as 'white oil') reflects usage by consumers and merchants who did not know, and usually had no need of knowing, the oil's precise chemical makeup. Merriam-Webster states the first use of the term "mineral oil" as being 1771. [ 4 ] Prior to the late 19th century, the chemical science to determine the makeup of an oil was unavailable in any case. A similar lexical situation occurred with the term " white metal ". "Mineral oil", sold widely and cheaply in the United States, is not sold as such in the United Kingdom. Instead, British pharmacologists use the terms "paraffinum perliquidum" for light mineral oil and "paraffinum liquidum" or "paraffinum subliquidum" for somewhat more viscous varieties. The term "paraffinum liquidum" is often seen on the ingredient lists of baby oil and cosmetics . British aromatherapists commonly use the term "white mineral oil". In lubrication , mineral oils make up Group I, II, and III base oils that are refined from petroleum. [ 5 ] [ 6 ] The World Health Organization classifies minimally treated mineral oils as Group 1 carcinogens, known to be carcinogenic to humans; [ 7 ] highly refined oils are classified as Group 3, meaning the available information is not sufficient to classify them as either carcinogenic or harmless. [ 8 ] The UK Food Standards Agency (FSA) carried out a risk assessment on the migration of components from printing inks used on carton-board packaging—including mineral oils—into food in 2011, based on the findings of a survey conducted in the same year. The FSA did not identify any specific food safety concerns due to inks. [ 9 ] People can be exposed to mineral oil mist in the workplace through inhalation, skin contact, or eye contact. In the United States, the Occupational Safety and Health Administration has set the legal limit for mineral oil mist exposure in the workplace at 5 mg/m 3 (0.0022 gr/cu ft) over an 8-hour workday, and the National Institute for Occupational Safety and Health has set a recommended exposure limit of 5 mg/m 3 (0.0022 gr/cu ft) over an 8-hour workday; a previous limit of 10 mg/m 3 (0.0044 gr/cu ft) for short-term exposure was rescinded, according to the 2019 Guide to Occupational Exposure Values compiled by the ACGIH . Levels of 2,500 mg/m 3 (1.1 gr/cu ft) and higher are indicated as immediately dangerous to life and health . However, current toxicological data [ which? ] [ whose? ] does not contain any evidence of irreversible health effects due to short-term exposure at any level; the current value of 2,500 mg/m 3 (1.1 gr/cu ft) is indicated as being arbitrary.
[ 10 ] Mineral oil is used as a laxative to alleviate constipation by retaining water in stool and the intestines . [ 11 ] Although generally considered safe, as noted above, there is a concern that mist inhalation may lead to serious health conditions such as pneumonia . [ 12 ] Mineral oil can be administered either orally [ 13 ] or rectally. [ 14 ] It is sometimes used as a lubricant in enema preparations, as most of the ingested material is excreted in the stool rather than being absorbed by the body. [ 15 ] It is recommended by the American Society for Reproductive Medicine for use as a fertility-preserving vaginal lubricant . [ 16 ] However, it is known that oils degrade latex condoms . [ 17 ] Mineral oil of special purity is often used as an overlay covering microdrops of culture medium in petri dishes , during the culture of oocytes and embryos in IVF and related procedures. The use of oil presents several advantages over the open culture system: it allows for several oocytes and embryos to be cultured simultaneously, but observed separately, in the same dish; it minimizes concentration and pH changes by preventing evaporation of the medium; it allows for a significant reduction of the medium volume used (as few as 20 μl (0.0012 cu in) per oocyte instead of several milliliters for the batch culture); and it serves as a temperature buffer minimizing thermal shock to the cells while the dish is taken out of the incubator for observation. Over-the-counter veterinarian -use mineral oil is intended as a mild laxative for pets and livestock. [ 18 ] Certain mineral oils are used in livestock vaccines , as an adjuvant to stimulate a cell-mediated immune response to the vaccinating agent. In the poultry industry , plain mineral oil can also be swabbed onto the feet of chickens infected with scaly mites on the shank, toes, and webs. Mineral oil suffocates these tiny parasites. [ 19 ] In beekeeping , food grade mineral oil-saturated paper napkins placed in hives are used as a treatment for tracheal and other mites . It is also used along with a cotton swab to remove un-shed skin on reptiles such as lizards and snakes. Mineral oil is a common ingredient in baby lotions , cold creams , ointments , and cosmetics. It is a lightweight, inexpensive oil that is odorless and tasteless. It can be used on eyelashes to prevent brittleness and breaking and, in cold cream , is also used to remove creme make-up and temporary tattoos . One of the common concerns regarding the use of mineral oil is its presence on several lists of comedogenic substances. [ citation needed ] These lists of comedogenic substances were developed many years ago and are frequently quoted in the dermatological literature. The type of highly refined and purified mineral oil found in cosmetic and skincare products is noncomedogenic (does not clog pores). [ 20 ] Mineral oil is used in a variety of industrial/mechanical capacities as a non-conductive coolant or thermal fluid in electric components, as it does not conduct electricity and functions to displace air and water. Some examples are in transformers , where it is known as transformer oil , and in high-voltage switchgear , where mineral oil is used as an insulator and coolant to disperse switching arcs. [ 21 ] Because it is essentially incompressible, mineral oil is used as a hydraulic fluid in hydraulic machinery and vehicles. The dielectric constant of mineral oil ranges from 2.3 at 50 °C (122 °F) to 2.1 at 200 °C (392 °F).
[ 22 ] Electric space heaters sometimes use mineral oil as a heat transfer oil . Lubricants used for older refrigerator and air conditioning compressors are based on mineral oil, especially those using R-22 refrigerant . Mineral oil is used as a lubricant , a cutting fluid , and as a conditioning oil for jute fibres selected for textile production , a process known as 'jute batching'. [ 23 ] Spindle oils are light mineral oils used as lubricants in textile industries. An often-cited limitation of mineral oil is that it is poorly biodegradable; in some applications, vegetable oils such as cottonseed oil or rapeseed oil may be used instead. [ 24 ] Because of its properties that prevent water absorption, combined with its lack of flavor and odor, food grade mineral oil is a popular preservative for wooden cutting boards , countertops, salad bowls , and utensils . Periodically rubbing a small amount of mineral oil into a wooden kitchen item impedes absorption of food liquids, and thereby food odors, easing the process of hygienically cleaning wooden utensils and equipment. The use of mineral oil to impede water absorption can also prevent cracks and splits from forming in wooden utensils due to wetting and drying cycles. However, some of the mineral oil used on these items, if in contact with food, will be picked up by it and therefore ingested. [ citation needed ] Mineral oil is occasionally used in the food industry, particularly for confectionery . In this application, it is typically used for the glossy effect it produces, and to prevent the candy pieces from adhering to each other, such as in Swedish Fish . [ 25 ] The use of food grade mineral oil is self-limiting because of its laxative effect, and is not considered a risk in food for any age class. [ 26 ] The maximum daily intake is calculated to be about 100 mg (1.5 gr), of which some 80 mg (1.2 gr) are contributed from its use on machines in the baking industry. [ 15 ] Mineral oil, under various names, is one of the most widely used insecticides . [ 27 ] See Horticultural oil . Mineral oil's ubiquity has also led to its use in a number of other niche applications.
https://en.wikipedia.org/wiki/Mineral_oil
Mineral physics is the science of materials that compose the interior of planets, particularly the Earth. It overlaps with petrophysics , which focuses on whole-rock properties. It provides information that allows interpretation of surface measurements of seismic waves , gravity anomalies , geomagnetic fields and electromagnetic fields in terms of properties in the deep interior of the Earth. This information can be used to provide insights into plate tectonics , mantle convection , the geodynamo and related phenomena. Laboratory work in mineral physics requires high pressure measurements. The most common tool is a diamond anvil cell , which uses diamonds to put a small sample under pressure that can approach the conditions in the Earth's interior. Many of the pioneering studies in mineral physics involved explosions or projectiles that subject a sample to a shock. For a brief time interval, the sample is under pressure as the shock wave passes through. Pressures as high as any in the Earth have been achieved by this method. However, the method has some disadvantages. The pressure is very non-uniform and is not adiabatic , so the pressure wave heats the sample up in passing. The conditions of the experiment must be interpreted in terms of a set of pressure-density curves called Hugoniot curves . [ 1 ] Multi-anvil presses involve an arrangement of anvils to concentrate pressure from a press onto a sample. Typically the apparatus uses an arrangement of eight cube-shaped tungsten carbide anvils to compress a ceramic octahedron containing the sample and a ceramic or Re metal furnace. The anvils are typically placed in a large hydraulic press . The method was developed by Kawai and Endo in Japan. [ 2 ] Unlike shock compression, the pressure exerted is steady, and the sample can be heated using a furnace. Pressures of about 28 GPa (equivalent to depths of 840 km), [ 3 ] and temperatures above 2300 °C, [ 4 ] can be attained using WC anvils and a lanthanum chromite furnace. The apparatus is very bulky and cannot achieve pressures like those in the diamond anvil cell (below), but it can handle much larger samples that can be quenched and examined after the experiment. [ 5 ] Recently, sintered diamond anvils have been developed for this type of press that can reach pressures of 90 GPa (2700 km depth). [ 6 ] The diamond anvil cell is a small table-top device for concentrating pressure. It can compress a small (sub-millimeter sized) piece of material to extreme pressures , which can exceed 3,000,000 atmospheres (300 gigapascals ). [ 7 ] This is beyond the pressures at the center of the Earth . The concentration of pressure at the tip of the diamonds is possible because of their hardness , while their transparency and high thermal conductivity allow a variety of probes to be used to examine the state of the sample. The sample can be heated to thousands of degrees. Achieving temperatures found within the interior of the earth is just as important to the study of mineral physics as creating high pressures. Several methods are used to reach these temperatures and measure them. Resistive heating is the most common and simplest to measure. The application of a voltage to a wire heats the wire and surrounding area. A large variety of heater designs are available, including those that heat the entire diamond anvil cell (DAC) body and those that fit inside the body to heat the sample chamber. Temperatures below 700 °C can be reached in air; above this temperature, diamond oxidizes.
With an argon atmosphere, higher temperatures up to 1700 °C can be reached without damaging the diamonds. A tungsten resistive heater with Ar in a BX90 DAC was reported to achieve temperatures of 1400 °C. [ 8 ] Laser heating is done in a diamond-anvil cell with Nd:YAG or CO 2 lasers to achieve temperatures above 6000 K. Spectroscopy is used to measure black-body radiation from the sample to determine the temperature. Laser heating is continuing to extend the temperature range that can be reached in the diamond-anvil cell, but suffers two significant drawbacks. First, temperatures below 1200 °C are difficult to measure using this method. Second, large temperature gradients exist in the sample because only the portion of the sample hit by the laser is heated. [ citation needed ] To deduce the properties of minerals in the deep Earth, it is necessary to know how their density varies with pressure and temperature . Such a relation is called an equation of state (EOS). A simple example of an EOS that is predicted by the Debye model for harmonic lattice vibrations is the Mie–Grüneisen equation of state, $\left(\partial P / \partial T\right)_{V} = \gamma_{D} C_{V} / V$, where $C_V$ is the heat capacity and $\gamma_D$ is the Debye gamma. The latter is one of many Grüneisen parameters that play an important role in high-pressure physics. A more realistic EOS is the Birch–Murnaghan equation of state . [ 9 ] : 66–73 Inversion of seismic data gives profiles of seismic velocity as a function of depth. These must still be interpreted in terms of the properties of the minerals. A very useful heuristic was discovered by Francis Birch : plotting data for a large number of rocks, he found that for rocks and minerals of a constant average atomic weight $\overline{M}$, the compressional wave velocity is linear in density: $v_p = a(\overline{M}) + b\rho$. [ 10 ] [ 11 ] This relationship became known as Birch's law . This makes it possible to extrapolate known velocities for minerals at the surface to predict velocities deeper in the Earth. There are a number of experimental procedures designed to extract information from both single and powdered crystals. Some techniques can be used in a diamond anvil cell (DAC) or a multi-anvil press (MAP). Using quantum mechanical numerical techniques, it is possible to achieve very accurate predictions of crystal properties including structure, thermodynamic stability, elastic properties and transport properties. The limit of such calculations tends to be computing power, as computation run times of weeks or even months are not uncommon. [ 9 ] : 107–109 The field of mineral physics was not named until the 1960s, but its origins date back at least to the early 20th century and the recognition that the outer core is fluid, because seismic work by Oldham and Gutenberg showed that it did not allow shear waves to propagate. [ 16 ] A landmark in the history of mineral physics was the publication of Density of the Earth by Erskine Williamson, a mathematical physicist, and Leason Adams, an experimentalist. Working at the Geophysical Laboratory in the Carnegie Institution of Washington , they considered a problem that had long puzzled scientists. It was known that the average density of the Earth was about twice that of the crust , but it was not known whether this was due to compression or changes in composition in the interior.
Williamson and Adams assumed that deeper rock is compressed adiabatically (without releasing heat) and derived the Adams–Williamson equation , which determines the density profile from measured densities and elastic properties of rocks. They measured some of these properties using a 500-ton hydraulic press that applied pressures of up to 1.2 gigapascals (GPa). They concluded that the Earth's mantle had a different composition than the crust, perhaps ferromagnesian silicates, and that the core was some combination of iron and nickel. They estimated the pressure and density at the center to be 320 GPa and 10,700 kg/m 3 , not far off the current estimates of 360 GPa and 13,000 kg/m 3 . [ 17 ] The experimental work at the Geophysical Laboratory benefited from the pioneering work of Percy Bridgman at Harvard University , who developed methods for high-pressure research that led to a Nobel Prize in Physics . [ 17 ] A student of his, Francis Birch , led a program to apply high-pressure methods to geophysics. [ 18 ] Birch extended the Adams–Williamson equation to include the effects of temperature. [ 17 ] In 1952, he published a classic paper, Elasticity and constitution of the Earth's interior , in which he established some basic facts: the mantle is predominantly silicates ; there is a transition zone between the upper and lower mantle associated with a phase transition; and the inner and outer core are both iron alloys. [ 19 ]
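The Adams–Williamson construction lends itself to a short numerical sketch. The Python fragment below is a minimal illustration, not the historical calculation: it steps the equation d(rho)/dr = -rho*g/Phi downward through a single homogeneous, self-compressing layer, where Phi = K_S/rho is the seismic parameter (obtainable from seismic velocities as Phi = v_p^2 - (4/3) v_s^2). The starting density, the constant Phi, and the layer thickness are illustrative stand-ins, not real Earth data:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2

# Illustrative starting values for a mantle-like layer (not real Earth data):
r = 6.371e6          # radius at the top of the layer, m
rho = 3300.0         # assumed density at the top, kg/m^3
phi = 4.0e7          # seismic parameter Phi = K_S/rho, m^2/s^2, held constant here
mass = 5.972e24      # mass enclosed within r at the start, kg

dr = 1.0e3           # integration step, m
for _ in range(1000):                          # integrate 1000 km downward
    g = G * mass / r**2                        # local gravity from the enclosed mass
    rho += rho * g / phi * dr                  # d(rho)/dr = -rho*g/Phi, stepping down
    mass -= 4.0 * math.pi * r**2 * rho * dr    # strip off the shell just crossed
    r -= dr

print(f"density at {(6.371e6 - r) / 1e3:.0f} km depth: {rho:.0f} kg/m^3")
```

With these placeholder values the density grows from 3300 to roughly 4200 kg/m^3 over 1000 km, showing how self-compression alone steepens the density profile; Williamson and Adams' point was that this effect is too small to explain the Earth's mean density without a compositional change at depth.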
https://en.wikipedia.org/wiki/Mineral_physics
Mineral processing is the process of separating commercially valuable minerals from their ores in the field of extractive metallurgy . [ 1 ] Depending on the processes used in each instance, it is often referred to as ore dressing or ore milling . Beneficiation is any process that improves (benefits) the economic value of the ore by removing the gangue minerals , which results in a higher grade product ( ore concentrate ) and a waste stream ( tailings ). There are many different types of beneficiation, with each step furthering the concentration of the original ore. Key is the concept of recovery , the mass (or equivalently molar) fraction of the valuable mineral (or metal) extracted from the ore and carried across to the concentrate. Before the advent of heavy machinery, raw ore was broken up using hammers wielded by hand, a process called " spalling ". Eventually, mechanical means were found to achieve this. For instance, stamp mills were being used in central Asia in the vicinity of Samarkand as early as 973. There is evidence the process was in use in Persia in the early medieval period . By the 11th century, stamp mills were in widespread use throughout the medieval Islamic world , from Islamic Spain and North Africa in the west to Central Asia in the east. [ 2 ] A later example was the Cornish stamps , consisting of a series of iron hammers mounted in a vertical frame, raised by cams on the shaft of a waterwheel and falling onto the ore under gravity. Iron beneficiation has been evident since as early as 800 BC in China with the use of the bloomery . [ 3 ] A bloomery is the original form of smelting and allowed people to make fires hot enough to melt oxides into a liquid that separates from the iron. Although the bloomery was eventually phased out by the invention of the blast furnace , it was still heavily relied on in Africa and Europe until the early part of the second millennium. The blast furnace was the next step in smelting iron, producing pig iron . [ 4 ] The first blast furnaces in Europe appeared in the early 1200s around Sweden and Belgium, and not until the late 1400s in England. The pig iron poured from a blast furnace is high in carbon, leaving it hard, brittle, and difficult to work. In 1856 the Bessemer process was invented, which turns brittle pig iron into steel, a more malleable metal. [ 4 ] Since then, many different technologies have been invented to replace the Bessemer process, such as the electric arc furnace , basic oxygen steelmaking , and direct reduced iron (DRI). [ 5 ] For sulfide ores, a different beneficiation process is used: the ore needs to have the sulfur removed before smelting can begin. Roasting was the primary method of removing it, where wood was placed on heaps of ore and set on fire to promote oxidation. [ 6 ] [ 7 ] The earliest roasting was done in the open, allowing large clouds of sulfur dioxide to blow over the land and causing serious harm to surrounding ecosystems, both aquatic and terrestrial. The clouds of sulfur dioxide, combined with local deforestation for the wood needed for roasting, compounded the damage to the environment, [ 6 ] as seen in Sudbury , Ontario and the Inco Superstack . [ 7 ] The simplest method of separating ore from the gangue consists of picking out the individual crystals of each. This is a very tedious process, particularly when the individual particles are small.
Another comparatively simple method relies on the various minerals having different densities , causing them to collect in different places: metallic minerals (being heavier) will drop out of suspension more quickly than lighter ones, which will be carried further by a stream of water. The process of panning and sifting for gold uses both of these methods. Various devices known as ' buddles ' were used to take advantage of this property. [ when? ] Later, more advanced machines were used, such as the Frue vanner , invented in 1874. Other equipment used historically includes the hutch, a trough used with some ore-dressing machines, and the keeve or kieve, a large tub used for differential settlement. Beneficiation can begin within the mine itself. Most mines have a crusher within the mine itself, where separation of ore and gangue minerals begins and, as a side effect, the ore becomes easier to transport. After the crusher, the ore goes through a grinder or a mill to reduce the ore to fine particles. Dense media separation (DMS) is used to further separate the desired ore from rocks and gangue minerals. This stratifies the crushed aggregate by density, making separation easier. Where the DMS occurs in the process can be important: the grinders or mills will process much less waste rock if the DMS occurs beforehand. This lowers wear on the equipment as well as operating costs, since a lower volume is being put through. [ 8 ] After the milling stage, the ore can be further separated from the rock. One way this can be achieved is by using the physical properties of the ore to separate it from the rest of the rock. Prior to any physical separation process, sizing of ore particles is important for effective separation. This is done by using either industrial screens or classifiers. [ 9 ] These physical processes are gravity separation , flotation, and magnetic separation . Gravity separation uses centrifugal forces and the specific gravity of ores and gangue to separate them. [ 10 ] Magnetic separation is used to separate magnetic gangue from the desired ore, or conversely to remove a magnetic target ore from nonmagnetic gangue. [ 11 ] DMS is also considered a physical separation. Some physical properties of ores cannot be relied on for separation, so chemical processes are used to separate the ores from the rock. Froth flotation , leaching , and electrowinning are the most common types of chemical separation. Froth flotation uses hydrophobic and hydrophilic properties to separate the ore from the gangue. Hydrophobic particles rise to the top of the solution to be skimmed off. [ 12 ] [ 13 ] Changes to the pH of the solution can influence which particles will be hydrophilic. Leaching works by dissolving the desired ore into solution from the rock. [ 14 ] Electrowinning is not a primary method of separation, but is required to get the ore out of solution after leaching. Mineral processing can involve four general types of unit operation: 1) comminution – particle size reduction; 2) sizing – separation of particle sizes by screening or classification; 3) concentration by taking advantage of physical and surface chemical properties; and 4) dewatering – solid/liquid separation. In all of these processes, the most important consideration is the economics of the process, which is dictated by the grade and recovery of the final product. To do this, the mineralogy of the ore needs to be considered, as this dictates the amount of liberation required and the processes that can occur.
The smaller the particles processed, the greater the theoretical grade and recovery of the final product. This, however, becomes difficult with fine particles, since they prevent certain concentration processes from occurring. Comminution is particle size reduction of materials. Comminution may be carried out on either dry materials or slurries. Crushing and grinding are the two primary comminution processes. Crushing is normally carried out on run-of-mine [ 15 ] ore, while grinding (normally carried out after crushing) may be conducted on dry or slurried material. In comminution, the size reduction of particles is done by three types of forces: compression, impact and attrition. Compression and impact forces are extensively used in crushing operations, while attrition is the dominant force in grinding. The equipment primarily used in crushing comprises jaw crushers, gyratory crushers and cone crushers, whereas rod mills and ball mills , usually closed circuited with a classifier unit, are generally employed for grinding purposes in a mineral processing plant. Crushing is a dry process, whereas grinding is generally performed wet and hence is more energy intensive. Sizing is the general term for separation of particles according to their size. The simplest sizing process is screening, or passing the particles to be sized through a screen or number of screens. Screening equipment can include grizzlies, [ 16 ] bar screens, wedge wire screens, radial sieves, banana screens, multi-deck screens, vibratory screens, fine screens, flip flop screens, and wire mesh screens. Screens can be static (typically the case for very coarse material), or they can incorporate mechanisms to shake or vibrate the screen. Some considerations in this process include the screen material, the aperture size, shape and orientation, the amount of near-sized particles, the addition of water, the amplitude and frequency of the vibrations, the angle of inclination, the presence of harmful materials, like steel and wood, and the size distribution of the particles. Classification refers to sizing operations that exploit the differences in settling velocities exhibited by particles of different size. Classification equipment may include ore sorters , gas cyclones , hydrocyclones , rotating trommels , rake classifiers or fluidized classifiers. An important factor in both comminution and sizing operations is the determination of the particle size distribution of the materials being processed, commonly referred to as particle size analysis . Many techniques for analyzing particle size are used, and the techniques include both off-line analyses, which require that a sample of the material be taken for analysis, and on-line techniques that allow for analysis of the material as it flows through the process. There are a number of ways to increase the concentration of the wanted minerals: in any particular case, the method chosen will depend on the relative physical and surface chemical properties of the mineral and the gangue. In chemistry, concentration is defined as the number of moles of a solute in a volume of solution; in mineral processing, concentration means increasing the percentage of the valuable mineral in the concentrate.
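Grade and recovery accounting of this kind is commonly done with the standard two-product formula, which recovers the mass split and the recovery from assays alone. The sketch below is a generic illustration with invented copper assays, not data from any particular plant:

```python
def two_product_recovery(f, c, t):
    """Recovery of the valuable mineral to the concentrate, from assays alone.

    f, c, t: grades (mass fractions) of feed, concentrate and tailings.
    """
    mass_pull = (f - t) / (c - t)   # concentrate mass per unit of feed (mass balance)
    return mass_pull * c / f        # valuable mineral in concentrate / in feed

# Invented assays: 2.5% Cu feed, 25% Cu concentrate, 0.3% Cu tailings
print(f"recovery = {two_product_recovery(0.025, 0.25, 0.003):.1%}")   # about 89%
```

The trade-off described above shows up directly in these numbers: pushing for a higher concentrate grade c generally drives more valuable mineral to the tailings (higher t), lowering recovery.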
Gravity separation is the separation of two or more minerals of different specific gravity by their relative movement in response to the force of gravity and one or more other forces (such as centrifugal forces, magnetic forces, buoyant forces), one of which is resistance to motion (drag force) by a viscous medium such as heavy media, water or, less commonly, air. Gravity separation is one of the oldest techniques in mineral processing, but has seen a decline in its use since the introduction of methods like flotation, classification, magnetic separation and leaching. Gravity separation dates back to at least 3000 BC, when Egyptians used the technique for separation of gold. It is necessary to determine the suitability of a gravity concentration process before it is employed for concentration of an ore. The concentration criterion CC is commonly used for this purpose; where SG represents specific gravity, it is given by CC = (SG of heavy mineral − SG of fluid) / (SG of light mineral − SG of fluid). For example, gold (SG 19.3) against quartz gangue (SG 2.65) in water (SG 1.0) gives CC ≈ 11, indicating an easy separation. Although the concentration criterion is a useful rule of thumb when predicting amenability to gravity concentration, factors such as particle shape and the relative concentration of heavy and light particles can dramatically affect separation efficiency in practice. There are several methods that make use of the weight or density differences of particles: [ 17 ] These processes can be classified as either density separation or gravity (weight) separation. In dense media separation a medium is created with a density in between the density of the ore and gangue particles. When subjected to this medium, particles either float or sink depending on their density relative to the medium. In this way the separation takes place purely on density differences and does not, in principle, rely on any other factors such as particle weight or shape. In practice, particle size and shape can affect separation efficiency. Dense medium separation can be performed using a variety of media: organic liquids, aqueous solutions, or suspensions of very fine particles in water or air. The organic liquids are typically not used due to their toxicity, difficulties in handling and relative cost. Industrially, the most common dense medium is a suspension of fine magnetite and/or ferrosilicon particles. An aqueous solution as a dense medium is used in coal processing in the form of a Belknap wash, and suspensions in air are used in water-deficient areas, like areas of China, where sand is used to separate coal from the gangue minerals. Gravity separation is also called relative gravity separation, as it separates particles due to their relative response to a driving force. This is controlled by factors such as particle weight, size and shape. These processes can also be classified into multi-G and single-G processes. The difference is the magnitude of the driving force for the separation. Multi-G processes allow the separation of very fine particles (in the range of 5 to 50 micron) by increasing the driving force of separation in order to increase the rate at which particles separate. In general, single-G processes are only capable of processing particles that are greater than approximately 80 micron in diameter. Of the gravity separation processes, the spiral concentrators and circular jigs are two of the most economical due to their simplicity and use of space. They operate by flowing film separation and can either use washwater or be washwater-less.
The washwater spirals separate particles more easily but can have issues with entrainment of gangue with the concentrate produced. Froth flotation is an important concentration process. This process can be used to separate any two different particles and operates by exploiting the surface chemistry of the particles. In flotation, bubbles are introduced into a pulp and the bubbles rise through the pulp. [ 19 ] In the process, hydrophobic particles become bound to the surface of the bubbles. The driving force for this attachment is the change in the surface free energy when the attachment occurs. These bubbles rise through the slurry and are collected from the surface. To enable these particles to attach, careful consideration of the chemistry of the pulp needs to be made. These considerations include the pH, Eh and the presence of flotation reagents. The pH is important as it changes the charge of the particles' surfaces and affects the chemisorption of collectors on the surface of the particles. The addition of flotation reagents also affects the operation of these processes. The most important chemical that is added is the collector. This chemical binds to the surface of the particles, as it is a surfactant. The main considerations for this chemical are the nature of the head group and the size of the hydrocarbon chain. The hydrocarbon tail needs to be short to maximize the selectivity for the desired mineral, and the head group dictates which minerals it attaches to. The frothers are another important chemical addition to the pulp or slurry, as they enable stable bubbles to be formed. This is important because if the bubbles coalesce, minerals fall off their surface. The bubbles, however, should not be too stable, as this prevents easy transportation and dewatering of the concentrate formed. The mechanism of these frothers is not completely known and further research into their mechanisms is being performed. Depressants and activators are used to selectively separate one mineral from another. Depressants inhibit the flotation of one mineral or minerals while activators enable the flotation of others. Examples include CN−, used to depress all sulfides but galena; this depressant is believed to operate by changing the solubility of chemisorbed and physisorbed collectors on sulfides. This theory originates from Russia. An example of an activator is Cu2+ ions, used for the flotation of sphalerite. There are a number of cells able to be used for the flotation of minerals. These include flotation columns and mechanical flotation cells. The flotation columns are used for finer minerals and typically produce a higher grade and lower recovery of minerals than mechanical flotation cells. The largest cells in use today can exceed 300 m3 in volume; such large cells are used because they are cheaper per unit volume than smaller cells, but they cannot be controlled as easily. This process was invented in the 19th century in Australia. It was used to recover a sphalerite concentrate from tailings produced using gravity concentration. Further improvements have come from Australia in the form of the Jameson Cell, developed at the University of Newcastle, Australia. This operates by the use of a plunging jet that generates fine bubbles. These fine bubbles have a higher kinetic energy and as such can be used for the flotation of fine-grained minerals, such as those produced by the IsaMill. Staged flotation reactors (SFRs) split the flotation process into three defined stages per cell.
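The grade-versus-recovery tradeoff mentioned for flotation columns can be quantified with the standard two-product formula, which follows from mass balances on solids and on the valuable metal. It is textbook material rather than something stated in the article; the copper assays below are illustrative.

```python
def two_product_recovery(feed_grade: float, conc_grade: float, tail_grade: float) -> float:
    """Recovery (%) of the valuable metal to the concentrate, from assays alone.

    Derived from the two mass balances F = C + T and F*f = C*c + T*t:
        R = 100 * c * (f - t) / (f * (c - t))
    feed_grade (f), conc_grade (c), tail_grade (t): assays in % metal.
    """
    f, c, t = feed_grade, conc_grade, tail_grade
    return 100.0 * c * (f - t) / (f * (c - t))

# Illustrative copper flotation assays: 2% Cu feed, 25% Cu concentrate,
# 0.2% Cu tailings -> roughly 90.7% of the copper reports to the concentrate.
print(f"{two_product_recovery(2.0, 25.0, 0.2):.1f}%")
```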
They are becoming increasingly common in use, as they require much less energy, air and installation space. There are two main types of electrostatic separators. These work in similar ways, but the forces applied to the particles differ: gravity and electrostatic attraction. The two types are electrodynamic separators (or high-tension rollers) and electrostatic plate separators. In high-tension rollers, particles are charged by a corona discharge. This charges the particles, which subsequently travel on a drum. The conducting particles lose their charge to the drum and are removed from the drum with centripetal acceleration. Electrostatic plate separators work by passing a stream of particles past a charged anode. The conductors lose electrons to the plate and are pulled away from the other particles due to the induced attraction to the anode. These separators are used for particles between 75 and 250 micron, and for efficient separation to occur the particles need to be dry, have a close size distribution and be uniform in shape. Of these considerations, one of the most important is the water content of the particles. This is important because a layer of moisture on the particles will render non-conductors conductive, as the water layer itself conducts. Electrostatic plate separators are usually used for streams that have small conductors and coarse non-conductors. The high-tension rollers are usually used for streams that have coarse conductors and fine non-conductors. These separators are commonly used for separating mineral sands; an example of one of these mineral processing plants is the CRL processing plant at Pinkenba in Brisbane, Queensland. In this plant, zircon, rutile and ilmenite are separated from the silica gangue, with the separation performed in a number of stages with roughers, cleaners, scavengers and recleaners. Magnetic separation is a process in which magnetically susceptible material is extracted from a mixture using a magnetic force. This separation technique can be useful in mining iron, as it is attracted to a magnet. In mines where wolframite was mixed with cassiterite, such as South Crofty and East Pool mine in Cornwall, or with bismuth, such as at the Shepherd and Murphy mine in Moina, Tasmania, magnetic separation was used to separate the ores. At these mines a device called a Wetherill's Magnetic Separator (invented by John Price Wetherill, 1844–1906) was used. In this machine the raw ore, after calcination, was fed onto a moving belt which passed underneath two pairs of electromagnets, under which further belts ran at right angles to the feed belt. The first pair of electromagnets was weakly magnetised and served to draw off any iron ore present. The second pair was strongly magnetised and attracted the wolframite, which is weakly magnetic. These machines were capable of treating 10 tons of ore a day. Magnetic separation operates by moving particles in a magnetic field. The force experienced in the magnetic field is given by F = m·k·H·(dH/dx), with k the magnetic susceptibility, H the magnetic field strength, and dH/dx the magnetic field gradient. As seen in this equation, the separation can be driven in two ways: through a gradient in the magnetic field or through the strength of the magnetic field. The different driving forces are used in the different concentrators.
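A minimal sketch evaluating the magnetic force expression reconstructed above follows. The text leaves the symbol m undefined; treating it as the particle mass (a common convention, here an assumption) and leaving unit bookkeeping to the reader, the sketch simply shows the two design levers the text names: field strength and field gradient.

```python
def magnetic_force(mass: float, susceptibility: float,
                   field_h: float, gradient_dh_dx: float) -> float:
    """Magnetic traction force F = m * k * H * dH/dx, as written in the text.

    All quantities are assumed to be in mutually consistent units; the
    point of the sketch is the proportionality, not absolute values.
    """
    return mass * susceptibility * field_h * gradient_dh_dx

# Doubling the field gradient doubles the force; so does doubling H.
base = magnetic_force(1e-6, 5e-4, 1e5, 1e7)
print(magnetic_force(1e-6, 5e-4, 1e5, 2e7) / base)  # 2.0
print(magnetic_force(1e-6, 5e-4, 2e5, 1e7) / base)  # 2.0
```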
These concentrators can be operated either with washwater or without. As with the spirals, washwater aids the separation of the particles while increasing the entrainment of the gangue in the concentrate. Modern, automated sorting applies optical sensors (visible spectrum, near infrared, X-ray, ultraviolet), which can be coupled with electrical conductivity and magnetic susceptibility sensors, to control the mechanical separation of ore into two or more categories on an individual rock-by-rock basis. New sensors have also been developed which exploit material properties such as electrical conductivity, magnetization, molecular structure and thermal conductivity. Sensor-based sorting has found application in the processing of nickel, gold, copper, coal and diamonds. Dewatering is an important process in mineral processing. The purpose of dewatering is to remove water absorbed by the particles, which increases the pulp density. This is done for a number of reasons: to enable ore and concentrates to be handled and transported easily, to allow further processing to occur, and to dispose of the gangue. The water extracted from the ore by dewatering is recirculated for plant operations after being sent to a water treatment plant. The main processes that are used in dewatering include dewatering screens, sedimentation, filtering, and thermal drying. These processes increase in difficulty and cost as the particle size decreases. Dewatering screens operate by passing particles over a screen. The particles pass over the screen while the water passes through the apertures in the screen. This process is only viable for coarse ores that have a close size distribution, as the apertures can otherwise allow small particles to pass through. Sedimentation operates by passing water into a large thickener or clarifier. In these devices, the particles settle out of the slurry under the effects of gravity or centripetal forces. These are limited by the surface chemistry of the particles and the size of the particles. To aid in the sedimentation process, flocculants and coagulants are added to reduce the repulsive forces between the particles. This repulsive force is due to the double layer formed on the surface of the particles. The flocculants work by binding multiple particles together, while the coagulants work by reducing the thickness of the charged layer on the outside of the particle. After thickening, slurry is often stored in ponds or impoundments. Alternatively, it can be pumped into a belt press or membrane filter press to recycle process water and create stackable, dry filter cake, or "tailings". [ 20 ] Thermal drying is usually used for fine particles and to remove low water content in the particles. Some common processes include rotary dryers, fluidized beds, spray driers, hearth dryers and rotary tray dryers. This process is usually expensive to operate due to the fuel requirement of the dryers. Many mechanical plants also incorporate hydrometallurgical or pyrometallurgical processes as part of an extractive metallurgical operation. Geometallurgy is a branch of extractive metallurgy that combines mineral processing with the geologic sciences. This includes the study of oil agglomeration. [ 21 ] [ 22 ] [ 23 ] [ 24 ] A number of auxiliary materials handling operations are also considered a branch of mineral processing, such as storage (as in bin design), conveying, sampling, weighing, slurry transport, and pneumatic transport.
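The pulp density raised by dewatering relates to weight-percent solids through a standard additive-volume relation. The following sketch is a textbook calculation, not from the article; the quartz specific gravity is an illustrative value.

```python
def slurry_density(pct_solids_by_weight: float, sg_solids: float,
                   sg_water: float = 1.0) -> float:
    """Slurry (pulp) density in t/m^3 from weight-percent solids.

    Based on additive volumes: 100 mass units of slurry occupy
    (pct/sg_solids + (100 - pct)/sg_water) volume units.
    """
    cw = pct_solids_by_weight
    return 100.0 / (cw / sg_solids + (100.0 - cw) / sg_water)

# A 50 wt% quartz slurry (SG ~ 2.65): density about 1.45 t/m^3.
print(f"{slurry_density(50.0, 2.65):.2f} t/m^3")
```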
The efficiency and efficacy of many processing techniques are influenced by upstream activities such as mining method and blending. [ 25 ] In the case of gold, after adsorbing onto carbon, the loaded carbon is put into a sodium hydroxide and cyanide solution, which strips the gold from the carbon into the solution. The gold ions are then removed from solution at steel wool cathodes by electrowinning. The gold then goes off to be smelted. [ 14 ] Lithium is hard to separate from gangue due to similarities in the minerals. In order to separate the lithium, both physical and chemical separation techniques are used. First, froth flotation is used. Due to similarities in mineralogy, there is not complete separation after flotation. The gangue found with lithium after flotation is often iron-bearing. The float concentrate therefore goes through magnetic separation to remove the magnetic gangue from the nonmagnetic lithium. [ 26 ] EMC, the European Metallurgical Conference, has developed into the most important networking and business event dedicated to the non-ferrous metals industry in Europe. Since the start of the conference series in 2001 at Friedrichshafen, it has hosted some of the most relevant metallurgists from all countries of the world. The conference is held every two years by invitation of the GDMB Society of Metallurgists and Miners and is particularly directed to metal producers, plant manufacturers, equipment suppliers and service providers as well as members of universities and consultants.
https://en.wikipedia.org/wiki/Mineral_processing
In geology, a redox buffer is an assemblage of minerals or compounds that constrains oxygen fugacity as a function of temperature. Knowledge of the redox conditions (or equivalently, oxygen fugacities) at which a rock forms and evolves can be important for interpreting the rock's history. Iron, sulfur, and manganese are three of the relatively abundant elements in the Earth's crust that occur in more than one oxidation state. For instance, iron, the fourth most abundant element in the crust, exists as native iron, ferrous iron (Fe2+), and ferric iron (Fe3+). The redox state of a rock affects the relative proportions of the oxidation states of these elements and hence may determine both the minerals present and their compositions. If a rock contains pure minerals that constitute a redox buffer, then the oxygen fugacity of equilibration is defined by one of the curves in the accompanying fugacity-temperature diagram. Redox buffers were developed in part to control oxygen fugacities in laboratory experiments to investigate mineral stabilities and rock histories. Each of the curves plotted in the fugacity-temperature diagram is for an oxidation reaction occurring in a buffer. Listed in order of decreasing oxygen fugacity at a given temperature, in other words from more oxidizing to more reducing conditions in the plotted temperature range, these buffers are: MH (magnetite-hematite), NiNiO (nickel-nickel oxide), FMQ (fayalite-magnetite-quartz), WM (wüstite-magnetite), IW (iron-wüstite), and QIF (quartz-iron-fayalite). As long as all the pure minerals (or compounds) are present in a buffer assemblage, the oxidizing conditions are fixed on the curve for that buffer. Pressure has only a minor influence on these buffer curves for conditions in the Earth's crust. The ratio of Fe2+ to Fe3+ within a rock determines, in part, the silicate mineral and oxide mineral assemblage of the rock. Within a rock of a given chemical composition, iron enters minerals based on the bulk chemical composition and the mineral phases which are stable at that temperature and pressure. For instance, at redox conditions more oxidizing than the MH (magnetite-hematite) buffer, much of the iron is likely to be present as Fe3+, and hematite is a likely mineral in iron-bearing rocks. Iron may only enter minerals such as olivine if it is present as Fe2+; Fe3+ cannot enter the lattice of fayalite olivine. Elements in olivine such as magnesium, however, stabilize olivine containing Fe2+ to conditions more oxidizing than those required for fayalite stability. Solid solution between magnetite and the titanium-bearing endmember, ulvospinel, enlarges the stability field of magnetite. Likewise, at conditions more reducing than the IW (iron-wüstite) buffer, minerals such as pyroxene can still contain Fe3+. The redox buffers therefore are only approximate guides to the proportions of Fe2+ and Fe3+ in minerals and rocks. Terrestrial igneous rocks commonly record crystallization at oxygen fugacities more oxidizing than the WM (wüstite-magnetite) buffer and more reducing than a log unit or so above the nickel-nickel oxide (NiNiO) buffer. Their oxidizing conditions thus are not far from those of the FMQ (fayalite-magnetite-quartz) redox buffer. Nonetheless, there are systematic differences that correlate with tectonic setting. Igneous rocks emplaced and erupted in island arcs typically record oxygen fugacities 1 or more log units more oxidizing than those of the NiNiO buffer.
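For reference, the oxidation reactions behind several of the buffers listed above can be written in their standard idealized forms (these are textbook reactions, not given in the article; wüstite is idealized here as stoichiometric FeO):

```latex
% Representative buffer reactions, idealized end-member compositions:
\begin{align}
  \text{MH:}\quad    & 4\,\mathrm{Fe_3O_4} + \mathrm{O_2} \rightleftharpoons 6\,\mathrm{Fe_2O_3} \\
  \text{NiNiO:}\quad & 2\,\mathrm{Ni} + \mathrm{O_2} \rightleftharpoons 2\,\mathrm{NiO} \\
  \text{FMQ:}\quad   & 3\,\mathrm{Fe_2SiO_4} + \mathrm{O_2} \rightleftharpoons 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} \\
  \text{IW:}\quad    & 2\,\mathrm{Fe} + \mathrm{O_2} \rightleftharpoons 2\,\mathrm{FeO}
\end{align}
```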
In contrast, basalt and gabbro in non-arc settings typically record oxygen fugacities from about those of the FMQ buffer to a log unit or so more reducing than that buffer. Oxidizing conditions are common in some environments of deposition and diagenesis of sedimentary rocks. The fugacity of oxygen at the MH (magnetite-hematite) buffer is only about 10⁻⁷⁰ atmospheres at 25 °C, whereas it is about 0.2 atmospheres in the Earth's atmosphere, so some sedimentary environments are far more oxidizing than those in magmas. Other sedimentary environments, such as the environments for formation of black shale, are relatively reducing. Oxygen fugacities during metamorphism extend to higher values than those in magmatic environments, because of the more oxidizing compositions inherited from some sedimentary rocks. Nearly pure hematite is present in some metamorphosed banded iron formations. In contrast, native nickel-iron is present in some serpentinites. Within meteorites, the iron-wüstite redox buffer may be more appropriate for describing the oxygen fugacity of these extraterrestrial systems. Sulfide minerals such as pyrite (FeS2) and pyrrhotite (Fe1−xS) occur in many ore deposits. Pyrite and its polymorph marcasite also are important in many coal deposits and shales. These sulfide minerals form in environments more reducing than that of the Earth's surface. When in contact with oxidizing surface waters, sulfides react: sulfate (SO4 2−) forms, and the water becomes acidic and charged with a variety of elements, some potentially toxic. Consequences can be environmentally harmful, as discussed in the entry for acid mine drainage. Sulfur oxidation to sulfate or sulfur dioxide also is important in generating sulfur-rich volcanic eruptions, like those of Pinatubo [ 3 ] in 1991 and El Chichón in 1982. These eruptions contributed unusually large quantities of sulfur dioxide to the Earth's atmosphere, with consequent effects on atmospheric quality and on climate. The magmas were unusually oxidizing, almost two log units more so than the NiNiO buffer. The calcium sulfate mineral anhydrite was present as phenocrysts in the erupted tephra. In contrast, sulfides contain most of the sulfur in magmas more reducing than the FMQ buffer.
https://en.wikipedia.org/wiki/Mineral_redox_buffer
Mineral springs are naturally occurring springs that produce hard water, water that contains dissolved minerals. Salts, sulfur compounds, and gases are among the substances that can be dissolved in the spring water during its passage underground. In this they are unlike sweet springs, which produce soft water with no noticeable dissolved gases. The dissolved minerals may alter the water's taste. Mineral water obtained from mineral springs, and precipitated salts such as Epsom salt, have long been important commercial products. Some mineral springs may contain significant amounts of harmful dissolved minerals, such as arsenic, and should not be drunk. [ 1 ] [ 2 ] Sulfur springs smell of rotten eggs due to hydrogen sulfide (H2S), which is hazardous and sometimes deadly. It is a gas, and it usually enters the body when it is breathed in. [ 3 ] The quantities ingested in drinking water are much lower and are not considered likely to cause harm, but few studies on long-term, low-level exposure had been done as of 2003. [ 4 ] The water of mineral springs is sometimes claimed to have therapeutic value. Mineral spas are resorts that developed around mineral springs, where (often wealthy) patrons would repair to "take the waters", meaning that they would drink (see hydrotherapy and water cure) or bathe in (see balneotherapy) the mineral water. Historical mineral springs were often outfitted with elaborate stonework, including artificial pools, retaining walls, colonnades, and roofs, sometimes in the form of fanciful "Greek temples", gazebos, or pagodas. Others were entirely enclosed within spring houses. For many centuries, in Europe, North America, and elsewhere, commercial proponents of mineral springs classified them according to the chemical composition of the water produced and according to the medicinal benefits supposedly accruing from each. Types of sedimentary rock, usually limestone (calcium carbonate), are sometimes formed by the evaporation, or rapid precipitation, of minerals from spring water as it emerges, especially at the mouths of hot mineral springs. In cold mineral springs, the rapid precipitation of minerals results from the reduction of acidity when the CO2 gas bubbles out. (These mineral deposits can also be found in dried lakebeds.) Spectacular formations, including terraces, stalactites, stalagmites and 'frozen waterfalls' can result (see, for example, Mammoth Hot Springs). One light-colored porous calcite of this type is known as travertine and has been used extensively in Italy and elsewhere as building material. Travertine can have a white, tan, or cream-colored appearance and often has a fibrous or concentric 'grain'. Another type of spring water deposit, containing siliceous as well as calcareous minerals, is known as tufa. Tufa is similar to travertine but is even softer and more porous. Chalybeate springs may deposit iron compounds such as limonite. Some such deposits were large enough to be mined as iron ore.
https://en.wikipedia.org/wiki/Mineral_spring
Mineral water is water from a mineral spring that contains various minerals, such as salts and sulfur compounds. It is usually still, but may be sparkling (carbonated/effervescent). Traditionally, mineral waters were used or consumed at their spring sources, often referred to as "taking the waters" or "taking the cure," at places such as spas, baths and wells. Today, it is far more common for mineral water to be bottled at the source for distributed consumption. Travelling to the mineral water site for direct access to the water is now uncommon, and in many cases not possible because of exclusive commercial ownership rights. More than 4,000 brands of mineral water are commercially available worldwide. [ 1 ] In many places the term "mineral water" is colloquially used to mean any bottled carbonated water or soda water, as opposed to tap water. The more calcium and magnesium ions that are dissolved in water, the harder it is said to be; water with few dissolved calcium and magnesium ions is described as soft. [ 2 ] The U.S. Food and Drug Administration classifies mineral water as water containing at least 250 parts per million total dissolved solids (TDS), originating from a geologically and physically protected underground water source. No minerals may be added to this water. [ 3 ] In the European Union, bottled water may be called mineral water when it is bottled at the source and has undergone no or minimal treatment. [ 4 ] Permitted is the removal of iron, manganese, sulfur and arsenic through decantation, filtration or treatment with ozone-enriched air, insofar as this treatment does not alter the composition of the water as regards the essential constituents which give it its properties. No additions are permitted except for carbon dioxide, which may be added, removed or re-introduced by exclusively physical methods. No disinfection treatment is permitted, nor is the addition of any bacteriostatic agents. A review by the World Health Organization found slightly reduced cardiovascular disease mortality from consuming harder water with higher mineral content, with magnesium and possibly calcium being the most likely contributors. [ 5 ] However, mineral content varies greatly among different brands of mineral water, and tap water can contain similar or greater amounts of minerals. One study found that the median mineral content of North American mineral waters was lower than that of tap water, though values varied widely among both groups. [ 6 ] Additionally, other dietary sources of minerals are available and may be more cost-effective and less environmentally impactful than bottled mineral water.
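The hardness scale referred to above is conventionally expressed as milligrams per litre of calcium carbonate equivalent. The sketch below uses the standard molar-mass conversion factors and the commonly used USGS-style classification bands; both are general water chemistry conventions, not figures from the article, and the sample concentrations are illustrative.

```python
def hardness_as_caco3(ca_mg_per_l: float, mg_mg_per_l: float) -> float:
    """Total hardness expressed as mg/L CaCO3 equivalent.

    Coefficients are the molar-mass ratios CaCO3/Ca (100.09/40.08 ~ 2.497)
    and CaCO3/Mg (100.09/24.31 ~ 4.118).
    """
    return 2.497 * ca_mg_per_l + 4.118 * mg_mg_per_l

def classify(hardness: float) -> str:
    # Commonly used (USGS-style) bands, in mg/L as CaCO3.
    if hardness <= 60:
        return "soft"
    if hardness <= 120:
        return "moderately hard"
    if hardness <= 180:
        return "hard"
    return "very hard"

h = hardness_as_caco3(ca_mg_per_l=80.0, mg_mg_per_l=26.0)
print(f"{h:.0f} mg/L as CaCO3 -> {classify(h)}")  # ~307 mg/L -> very hard
```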
https://en.wikipedia.org/wiki/Mineral_water
Mineral waters of Nakhchivan are water springs in the Nakhchivan Autonomous Republic that contain various minerals, such as sodium, potassium, magnesium, chloride and sulfur compounds. [ 1 ] [ 2 ] There are approximately seven thousand artesian aquifers and more than two hundred mineral water springs in Nakhchivan. [ 1 ] The mineral water springs of Nakhchivan make up about 60% of the overall water sources of Azerbaijan. Sirab, Badamli, Vaykhir, Gulustan and Daridagh are considered the most popular water sources in Nakhchivan. [ 3 ] They are used for treatment and as potable water sources. Research on the mineral waters in this territory started in the 1840s, and centralized exploration work was carried out there in the twentieth century. According to these investigations, there are six types, sixteen categories and thirty-three species of mineral waters in Nakhchivan, and 98% of them contain carbon dioxide. The temperature of most of the mineral waters is 8 °C to 22 °C, though waters of 50 °C and more have been found at the Sirab and Daridagh springs. 35% of the carbon dioxide-rich mineral waters of the country are situated in the Nakhchivan Autonomous Republic. [ 4 ] The Sirab mineral water spring is located in Babek district (18 kilometers north-east of Nakhchivan city), 1,100 meters above sea level in Sirab village. The word "sirab" consists of two parts, "sir" and "ab", meaning "secret water". Three types of water are distinguished in this spring according to their compositions. Sirab mineral water is used as a treatment for diseases such as liver and gastroenterological conditions. [ 4 ] This mineral water is exported from Azerbaijan to countries such as Russia, China, Turkey, Belarus, Kazakhstan, Turkmenistan, Iran, Iraq, Poland, Ukraine, Qatar, and the Baltic states. [ 5 ] The source of the Badamli mineral water is located 1,274 meters above sea level in Shahbuz district (three kilometers south-west of Badamli village) and consists of several springs. The chemical composition of the water includes carbon dioxide, hydrocarbonate-chloride and sodium-potassium minerals. Badamli is a healing and beverage water of the Narzan (Kislovodsk) and Saqveri (Georgia) type. [ 6 ] [ 7 ] [ 8 ] The Daridagh mineral water is located in Culfa district and consists of five springs and 32 exploration wells. The source of the water is 800–900 meters above sea level. According to its chemical composition, Daridagh mineral water is arsenic-bearing, highly mineralized and abundant in chloride-hydrocarbonate-sodium, and is close to the Kudova (Poland), La-Burbula (France), Durkheim (Germany) and Sinegorsk (Russia) mineral waters. [ 4 ] The Vaykhir mineral water is located 1,100 meters above sea level in Babek district. It consists of a number of springs, and two types of water were found in the central well. Vaykhir mineral water is beneficial for the treatment of diseases such as hepatitis, inflammation of the gallbladder, chronic gastritis and chronic colitis. [ 4 ] [ 6 ] The Gulustan mineral water is of the Sirab and Kislovodsk type and is located in Culfa district. The well was dug in 1962 and is more than 130 meters deep. Its chemical composition includes magnesium, sodium, potassium, carbonate and carbon dioxide, and it is mostly used as a treatment for gastroenterological diseases. [ 6 ] The Batabat mineral water is located 1,700 meters above sea level, north-east of Nakhchivan city.
This magnesium- and hydrocarbonate-rich, sodium-potassium abundant water is beneficial for gastroenterological diseases. [ 4 ]
https://en.wikipedia.org/wiki/Mineral_waters_of_Nakhchivan
Mineralized tissues are biological tissues that incorporate minerals into soft matrices. Typically these tissues form a protective shield or structural support. [ 1 ] Bone, mollusc shells, the deep sea sponge Euplectella, radiolarians, diatoms, antler bone, tendon, cartilage, tooth enamel and dentin are some examples of mineralized tissues. [ 1 ] [ 2 ] [ 3 ] [ 4 ] These tissues have been finely tuned to enhance their mechanical capabilities over millions of years of evolution. Thus, mineralized tissues have been the subject of many studies, since there is a lot to learn from nature, as seen from the growing field of biomimetics. [ 2 ] Their remarkable structural organization and engineering properties make these tissues desirable candidates for duplication by artificial means. [ 1 ] [ 2 ] [ 4 ] Mineralized tissues inspire miniaturization, adaptability and multifunctionality. While natural materials are made up of a limited number of components, a larger variety of material chemistries can be used to simulate the same properties in engineering applications. However, the success of biomimetics lies in fully grasping the performance and mechanics of these biological hard tissues before swapping the natural components with artificial materials for engineering design. [ 2 ] Mineralized tissues combine stiffness, low weight, strength and toughness due to the presence of minerals (the inorganic part) in soft protein networks and tissues (the organic part). [ 1 ] [ 2 ] There are approximately 60 different minerals generated through biological processes, but the most common ones are calcium carbonate, found in mollusk shells, and hydroxyapatite, present in teeth and bones. [ 2 ] Although one might think that the mineral content of these tissues would make them fragile, studies have shown that mineralized tissues are 1,000 to 10,000 times tougher than the minerals they contain. [ 2 ] [ 5 ] The secret to this underlying strength is in the organized layering of the tissue. Due to this layering, loads and stresses are transferred across several length scales, from macro to micro to nano, which results in the dissipation of energy within the arrangement. These scales, or hierarchical structures, are therefore able to distribute damage and resist cracking. [ 2 ] Two types of biological tissues have been the target of extensive investigation, namely nacre from mollusk shells and bone, both of which are high-performance natural composites. [ 2 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] Many mechanical and imaging techniques, such as nanoindentation and atomic force microscopy, are used to characterize these tissues. [ 10 ] [ 11 ] Although the degree of efficiency of biological hard tissues is as yet unmatched by any man-made ceramic composites, some promising new techniques to synthesize them are currently under development. [ 1 ] [ 2 ] Not all mineralized tissues develop through normal physiologic processes or are beneficial to the organism. For example, kidney stones contain mineralized tissues that are developed through pathologic processes; hence, biomineralization is an important process to understand in studying how these diseases occur. [ 3 ] The evolution of mineralized tissues has been puzzling for more than a century. It has been hypothesized that the first mechanism of animal tissue mineralization began either in the oral skeleton of conodonts or the dermal skeleton of early agnathans. The dermal skeleton is just surface dentin and basal bone, which is sometimes overlaid by enameloid.
It is thought that the dermal skeleton eventually became scales, which are homologous to teeth. Teeth were first seen in chondrichthyans and were made from all three components of the dermal skeleton, namely dentin, basal bone and enameloid. The mineralization mechanism of mammalian tissue was later elaborated in actinopterygians and sarcopterygians during bony fish evolution. It is expected that genetic analysis of agnathans will provide more insight into the evolution of mineralized tissues and clarify evidence from early fossil records. [ 12 ] Hierarchical structures are distinct features seen throughout different length scales. [ 1 ] To understand how the hierarchical structure of mineralized tissues contributes to their remarkable properties, those of nacre and bone are described below. [ 13 ] Hierarchical structures are characteristic of biology and are seen in all structural materials in biology, such as bone [ 14 ] and nacre from seashells. [ 15 ] Nacre has several hierarchical structural levels. [ 13 ] Some mollusc shells protect themselves from predators by using a two-layered system, one layer of which is nacre. [ 2 ] [ 13 ] Nacre constitutes the inner layer, while the other, outer, layer is made from calcite. [ 2 ] [ 13 ] The latter is hard and thus prevents any penetration through the shell, but is subject to brittle failure. On the other hand, nacre is softer and can sustain inelastic deformations, which makes it tougher than the hard outer shell. [ 13 ] The mineral found in nacre is aragonite, CaCO3, and it occupies 95% of the volume. Nacre is 3,000 times tougher than aragonite, and this is due to the other component of nacre, the softer organic biopolymers that take up the remaining 5% of the volume. [ 1 ] Furthermore, the nacreous layer also contains some strands of weaker material, called growth lines, that can deflect cracks. [ 1 ] [ 2 ] The microscale can be pictured as a three-dimensional brick-and-mortar wall. The bricks are 0.5 μm thick layers of microscopic aragonite polygonal tablets approximately 5-8 μm in diameter. What holds the bricks together is the mortar, a role played in nacre by the 20-30 nm layer of organic material. [ 1 ] Even though these tablets are usually illustrated as flat sheets, different microscopy techniques have shown that they are wavy in nature, with amplitudes as large as half of the tablet's thickness. [ 1 ] [ 2 ] This waviness plays an important role in the fracture of nacre, as it progressively locks the tablets when they are pulled apart and induces hardening. [ 2 ] The 30 nm thick interface between the tablets that connects them together, and the aragonite grains (detected by scanning electron microscopy) from which the tablets themselves are made, together represent another structural level. The organic material "gluing" the tablets together is made of proteins and chitin. [ 1 ] To summarize: on the macroscale, the shell, its two layers (nacre and calcite), and the weaker strands inside nacre represent three hierarchical structures. On the microscale, the stacked tablet layers and the wavy interface between them are two other hierarchical structures. Lastly, on the nanoscale, the connecting organic material between the tablets, together with the grains from which they are made, is the final, sixth hierarchical structure in nacre. [ 2 ] Like nacre and the other mineralized tissues, bone has a hierarchical structure that is also formed by the self-assembly of smaller components.
The mineral in bone (known as bone mineral) is hydroxyapatite with many carbonate ions, while the organic portion is made mostly of collagen and some other proteins. The hierarchical structure of bone extends down to the three-tiered hierarchy of the collagen molecule itself. [ 14 ] Different sources report different numbers of hierarchical levels in bone, which is a complex biological material. [ 1 ] [ 2 ] [ 16 ] The types of mechanisms that operate at different structural length scales are yet to be properly defined. [ 1 ] Five hierarchical structures of bone are presented below. [ 16 ] Compact bone and spongy bone are on a scale of several millimetres to 1 or more centimetres. [ 16 ] There are two hierarchical structures on the microscale. The first, at a scale of 100 μm to 1 mm, is inside the compact bone, where cylindrical units called osteons and small struts can be distinguished. [ 16 ] The second hierarchical structure, the ultrastructure, at a scale of 5 to 10 μm, is the actual structure of the osteons and small struts. [ 16 ] There are also two hierarchical structures on the nanoscale. The first is the structure inside the ultrastructure, the fibrils and extrafibrillar space, at a scale of several hundred nanometres. The second comprises the elementary components of mineralized tissues, at a scale of tens of nanometres. These components are the mineral crystals of hydroxyapatite, cylindrical collagen molecules, organic molecules such as lipids and proteins, and finally water. [ 16 ] The hierarchical structure common to all mineralized tissues is the key to their mechanical performance. [ 1 ] [ 2 ] The mineral is the inorganic component of mineralized tissues. This constituent is what makes the tissues harder and stiffer. [ 1 ] [ 2 ] Hydroxyapatite, calcium carbonate, silica, calcium oxalate, whitlockite, and monosodium urate are examples of minerals found in biological tissues. [ 2 ] [ 3 ] In mollusc shells, these minerals are carried to the site of mineralization in vesicles within specialized cells. Although it is in an amorphous mineral phase while inside the vesicles, the mineral destabilizes as it passes out of the cell and crystallizes. [ 17 ] In bone, studies have shown that calcium phosphate nucleates within the hole zones of the collagen fibrils and then grows in these zones until it occupies the maximum space. [ 8 ] The organic part of mineralized tissues is made of proteins. [ 1 ] In bone, for example, the organic layer is the protein collagen. [ 3 ] The degree of mineralization varies, and the organic component occupies a smaller volume as tissue hardness increases. [ 1 ] [ 18 ] However, without this organic portion, the biological material would be brittle and break easily. [ 1 ] [ 2 ] Hence, the organic component of mineralized tissues increases their toughness. [ 19 ] Moreover, many proteins are regulators of the mineralization process. They act in the nucleation or inhibition of hydroxyapatite formation. For example, the organic component in nacre is known to restrict the growth of aragonite. Some of the regulatory proteins in mineralized tissues are osteonectin, osteopontin, osteocalcin, bone sialoprotein and dentin phosphophoryn. [ 20 ] In nacre, the organic component is porous, which allows the formation of mineral bridges responsible for the growth and order of the nacreous tablets. [ 19 ] Understanding the formation of biological tissues is essential for properly reconstructing them artificially.
Although questions remain about some aspects, and the mineralization mechanisms of many mineralized tissues have yet to be determined, there are some ideas about those of the mollusc shell, bone and sea urchin. [ 17 ] The main structural elements involved in the mollusk shell formation process are a hydrophobic silk gel, an aspartic acid rich protein, and a chitin support. The silk gel is part of the protein portion and is mainly composed of glycine and alanine; it is not an ordered structure. The acidic proteins play a role in the configuration of the sheets. The chitin is highly ordered and is the framework of the matrix. These are the main elements of the overall process. [ 17 ] In bone, mineralization starts from a heterogeneous solution containing calcium and phosphate ions. The mineral nucleates inside the hole zones of the collagen fibrils as thin layers of calcium phosphate, which then grow to occupy the maximum space available there. The mechanisms of mineral deposition within the organic portion of the bone are still under investigation. Three possible suggestions are that nucleation is due to the precipitation of calcium phosphate solution, is caused by the removal of biological inhibitors, or occurs because of the interaction of calcium-binding proteins. [ 8 ] The sea urchin embryo has been used extensively in developmental biology studies. The larvae form a sophisticated endoskeleton that is made of two spicules. Each of the spicules is a single crystal of the mineral calcite. The latter is a result of the transformation of amorphous CaCO3 to a more stable form. Therefore, there are two mineral phases in larval spicule formation. [ 21 ] The mineral-protein interface, with its underlying adhesion forces, is involved in the toughening properties of mineralized tissues. The interaction at the organic-inorganic interface is important to understanding these toughening properties. [ 22 ] At the interface, a very large force (>6-5 nN) is needed to pull the protein molecules away from the aragonite mineral in nacre, despite the fact that the molecular interactions are non-bonded. [ 22 ] Some studies perform finite element analyses to investigate the behaviour of the interface. [ 7 ] [ 23 ] A model has shown that during tension, the back stress induced during the plastic stretch of the material plays a large role in the hardening of the mineralized tissue. In addition, the nanoscale asperities on the tablet surfaces provide resistance to interlamellar sliding and so strengthen the material. A surface topology study has shown that progressive tablet locking and hardening, which are needed for spreading large deformations over large volumes, occurred because of the waviness of the tablets. [ 23 ] In vertebrates, mineralized tissues not only develop through normal physiological processes but can also be involved in pathological processes. Some diseased areas that include the appearance of mineralized tissues include atherosclerotic plaques, [ 24 ] [ 25 ] tumoral calcinosis, juvenile dermatomyositis, and kidney and salivary stones. All physiologic deposits contain the mineral hydroxyapatite or one analogous to it. Imaging techniques such as infrared spectroscopy are used to provide information on the type of mineral phase and changes in mineral and matrix composition involved in the disease. [ 3 ] Also, clastic cells are cells that cause mineralized tissue resorption. An imbalance of clastic cells disrupts resorptive activity and causes diseases.
One of the studies involving mineralized tissues in dentistry concerns the mineral phase of dentin, in order to understand its alteration with aging. These alterations lead to "transparent" dentin, which is also called sclerotic. It was shown that a "dissolution and reprecipitation" mechanism governs the formation of transparent dentin. [ 26 ] The causes and cures of these conditions may be found through further studies on the role of the mineralized tissues involved. Natural structural materials comprising hard and soft phases arranged in elegant hierarchical multiscale architectures usually exhibit a combination of superior mechanical properties. For instance, many natural mechanical materials (bone, nacre, teeth, silk, and bamboo) are lightweight, strong, flexible, tough, fracture-resistant, and self-repairing. The general underlying mechanism behind such advanced materials is that the highly oriented stiff components give the materials great mechanical strength and stiffness, while the soft matrix "glues" the stiff components together and transfers stress to them. Moreover, the controlled plastic deformation of the soft matrix during fracture provides an additional toughening mechanism. This common strategy has been perfected by nature over millions of years of evolution, providing inspiration for building the next generation of structural materials. There are several techniques used to mimic these tissues. Some of the current techniques are described here. [ 1 ] [ 27 ] The large-scale model of materials is based on the fact that crack deflection is an important toughening mechanism of nacre. This deflection happens because of the weak interfaces between the aragonite tiles. Systems on the macroscopic scale are used to imitate these weak interfaces, with layered composite ceramic tablets held together by a weak interface "glue". Hence, these large-scale models can overcome the brittleness of ceramics. Since other mechanisms, like tablet locking and damage spreading, also play a role in the toughness of nacre, other model assemblies inspired by the wavy microstructure of nacre have also been devised on the large scale. [ 1 ] All hard materials in animals are achieved by the biomineralization process: dedicated cells deposit minerals into a soft polymeric (protein) matrix to strengthen, harden and/or stiffen it. Thus, biomimetic mineralization is an obvious and effective process for building synthetic materials with superior mechanical properties. The general strategy starts with organic scaffolds with ion-binding sites that promote heterogeneous nucleation. Localized mineralization can then be achieved by controlled ion supersaturation at these ion-binding sites. In such a composite material, the mineral functions as a strong, highly wear- and erosion-resistant surface layer, while the soft organic scaffold provides a tough load-bearing base that accommodates excessive strains. Ice templating, or freeze casting, is a newer method that uses the physics of ice formation to develop a layered hybrid material. Specifically, ceramic suspensions are directionally frozen under conditions designed to promote the formation of lamellar ice crystals, which expel the ceramic particles as they grow. After sublimation of the water, this results in a layered homogeneous ceramic scaffold that, architecturally, is a negative replica of the ice. The scaffold can then be filled with a second, soft phase so as to create a hard-soft layered composite.
This strategy is also widely applied to build other kinds of bioinspired materials, such as extremely strong and tough hydrogels, [ 28 ] metal/ceramic, and polymer/ceramic hybrid biomimetic materials with fine lamellar or brick-and-mortar architectures. The "brick" layer is extremely strong but brittle, and the soft "mortar" layer between the bricks accommodates limited deformation, thereby allowing for the relief of locally high stresses while also providing ductility without too much loss in strength. Additive manufacturing encompasses a family of technologies that draw on computer designs to build structures layer by layer. [ 29 ] Recently, many bioinspired materials with elegant hierarchical motifs have been built with features ranging in size from tens of micrometers down to the submicrometer scale. As a result, cracks can only form and propagate on the microscopic scale and do not lead to fracture of the whole structure. However, the time required to manufacture hierarchical mechanical materials, especially at the nano- and micro-scale, limits the application of this technique in large-scale manufacturing. Layer-by-layer deposition is a technique that, as its name suggests, consists of a layer-by-layer assembly to make multilayered composites like nacre. Examples of efforts in this direction include alternating layers of hard and soft components of TiN/Pt deposited with an ion beam system. The composites made by this sequential deposition technique do not have a segmented layered microstructure. Thus, sequential adsorption has been proposed to overcome this limitation; it consists of repeatedly adsorbing electrolytes and rinsing the tablets, which results in multilayers. [ 1 ] Thin film deposition focuses on reproducing the cross-lamellar microstructure of conch, instead of mimicking the layered structure of nacre, using micro-electromechanical systems (MEMS). Among mollusk shells, the conch shell has the highest degree of structural organization. The mineral aragonite and organic matrix are replaced by polysilicon and photoresist. The MEMS technology repeatedly deposits a thin silicon film. The interfaces are etched by reactive ion etching and then filled with photoresist. Three films are deposited consecutively. Although the MEMS technology is expensive and more time-consuming, there is a high degree of control over the morphology and large numbers of specimens can be made. [ 1 ] The method of self-assembly tries to reproduce not only the properties but also the processing of bioceramics. In this process, raw materials readily available in nature are used to achieve stringent control of nucleation and growth. The nucleation occurs on a synthetic surface, with some success. The technique operates at low temperature and in an aqueous environment. Self-assembling films form templates that effect the nucleation of ceramic phases. The downside of this technique is its inability to form a segmented layered microstructure. Segmentation is an important property of nacre, used for crack deflection in the ceramic phase without fracturing it. As a consequence, this technique does not mimic the microstructural characteristics of nacre beyond the layered organic/inorganic structure, and requires further investigation. [ 1 ] These various studies have advanced the understanding of mineralized tissues. However, it is still unclear which micro/nanostructural features are essential to the material performance of these tissues.
Constitutive laws along the various loading paths of these materials are also currently unavailable. For nacre, the role of some nanograins and mineral bridges requires further study to be fully defined. Successful biomimicking of mollusk shells will depend on gaining further knowledge of all these factors, especially the selection of materials influential in the performance of mineralized tissues. The final technique used for artificial reproduction must also be both cost-effective and industrially scalable. [ 1 ]
https://en.wikipedia.org/wiki/Mineralized_tissues
The purpose of a mineralizer is to facilitate the transport of insoluble "nutrient" to a seed crystal by means of a reversible chemical reaction. Over time, the seed crystal accumulates the material that was once in the nutrient and grows. Mineralizers are additives that aid the solubilization of the nutrient solid. When used in small quantities, mineralizers function as catalysts. Typically, a more stable solid is crystallized from a solution that consists of a less stable solid and a solvent, by a dissolution-precipitation (crystallization) process. [ 1 ] Hydrothermal growth involves the crystallization of a dissolved solid at elevated temperatures, often at high pressures. Historically, the goal of hydrothermal growth was to grow large crystals. Due to recent developments in nanotechnology, small nanocrystals are now desired and are made by hydrothermal growth with crystal size controlled by mineralizers. Different mineralizers result in crystals of different sizes and shapes. Typical mineralizers are hydroxides (NaOH, KOH, LiOH), carbonates (Na2CO3) and halides (NaF, KF, LiF, NaCl, KCl, LiCl). [ 2 ] Although usually the anion of the mineralizer is most active in dissolving the nutrient material, the cation also exerts an influence in some cases. The mineralizer can interact with impurities on the surface of the crystal and increase the growth rate. For example, the growth rate for sapphire (Al2O3) and zincite (ZnO) in potassium-containing solutions (KOH, K2CO3) is higher than in sodium-containing solutions (NaOH, Na2CO3). This difference is not readily understood but is attributed to the interaction between potassium and an impurity adsorbed on the surface. [ 1 ] Basic mineralizers such as NaOH or Na2CO3 are used in the hydrothermal growth of quartz crystals. [ 3 ] The precursor or nutrient is crushed silica and a solvent. Typical containers are airtight steel cylinders called autoclaves that can withstand high temperature and pressure. In the case of quartz crystals, the container is heated to 300 °C (which produces a pressure of 140 MPa). Without the mineralizer, higher temperatures are required to solubilize silica. Hydroxides and carbonates make silica more soluble by forming water-soluble sodium silicates, [ 4 ] as in the simplified equation SiO2 + 2 NaOH → Na2SiO3 + H2O. Anhydrous sodium silicate is a chain polymeric anion composed of corner-shared SiO4 tetrahedra. Hydrates form with the formula Na2SiO3·nH2O, which contain the discrete, approximately tetrahedral anion SiO2(OH)2 2− with waters of hydration. [ 3 ] In three-dimensional silica glass, the addition of sodium ions disrupts the bridging oxygen network; the resulting non-bridging oxygen ions possess an effective negative charge. The positively charged sodium ions give the structure a partly covalent and partly ionic character. As the concentration of Na+ increases, the ionically bound material links up and eventually forms a network of continuous channels. [ 5 ] Once silica is solubilized, components in the nutrient are transferred to the seed crystal, which is held at a cooler temperature than the nutrient, resulting in a high-purity quartz crystal. Hydroxide mineralizers are also used to control the alumina/silica ratio of zeolites. A typical recipe for the production of a zeolite includes the mineralizer, the solvent, the seed crystal, a nutrient consisting of silica (SiO2) and alumina (Al2O3), and a template.
Templates are cations that direct the polymerization of the anionic building blocks to form a certain zeolite structure. Different templating cations lead to different zeolite structures. Typical templates include tetramethylammonium (TMA), sodium (Na+) and potassium (K+). Different zeolites can also be formed by changing the ratios of the nutrient source, the type of mineralizer, or the temperature and pH of the reaction. [ 6 ] At high pH, zeolites with high alumina content are formed, because hydroxide suppresses the condensation and oligomerization of silica through the reaction shown above. At lower pH, zeolites with high silica content are favored. [ 1 ]
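The condensation step that hydroxide suppresses can be written, in its textbook sol-gel form (an idealized reaction, not given explicitly in the article), as the linking of two silanol groups:

```latex
% Silanol condensation (idealized). Excess hydroxide deprotonates silanol
% groups and keeps silicate species dissolved, favoring alumina-rich frameworks:
\mathrm{(HO)_3Si{-}OH \;+\; HO{-}Si(OH)_3 \;\rightleftharpoons\; (HO)_3Si{-}O{-}Si(OH)_3 \;+\; H_2O}
```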
https://en.wikipedia.org/wiki/Mineralizer
Minerva Cordero Braña [ 1 ] [ 2 ] is a Puerto Rican mathematician and a professor of mathematics at the University of Texas at Arlington. She is also the university's Senior Associate Dean for the College of Science, where she is responsible for the advancement of the research mission of the college. [ 3 ] President Biden awarded her the Presidential Award for Excellence in Science, Mathematics, and Engineering Mentoring (PAESMEM) on February 8, 2022. [ 4 ] Cordero was born in Bayamón, Puerto Rico. Her mother, whose schooling stopped after the fifth grade, [ 5 ] made education a top priority in the family home. She told her children, "the best thing I can give you is an education." Cordero and her siblings would do their homework together and discuss what they learned in school each day. [ 5 ] Cordero said, "We learned each other's subjects." Wanting to go to college, Cordero bought herself a college exam preparation book in high school and studied for the college-entrance exam. [ 5 ] [ 6 ] She states that her exam scores were the highest for her high school, Miguel Melendez Munoz High School. [ 6 ] Cordero attended the Universidad de Puerto Rico in Rio Piedras and received her B.S. in Mathematics in 1981. She was granted a National Science Foundation Minority Graduate Fellowship, which she used to attend the University of California at Berkeley to obtain her master's in mathematics in 1983. She continued her studies at the University of Iowa and obtained her Ph.D. in mathematics in 1989 under Norman Johnson. [ 7 ] [ 6 ] Cordero's research is in the area of finite semifields (non-associative algebras) and their associated planes (viewed affinely or projectively) in the general area of finite geometry. [ 8 ] After earning her Ph.D., Cordero worked as an associate and an assistant professor at Texas Tech University until 2001, when she joined the faculty at the University of Texas at Arlington. [ 3 ] Cordero served as the Mathematical Association of America's Governor-at-Large for Minority Interests from 2008 to 2011. [ 9 ] [ 10 ] Cordero's most-cited work is A survey of finite semifields. [ 11 ] She was the Principal Investigator for a National Science Foundation grant of $2.85 million awarded to the University of Texas at Arlington in 2009, for a project that placed mathematics graduate students in Arlington public schools to enhance teaching and learning in the classrooms and to inspire students to pursue careers in STEM (Science, Technology, Engineering, and Mathematics). [ 12 ]
https://en.wikipedia.org/wiki/Minerva_Cordero
MINFLUX, or minimal fluorescence photon fluxes microscopy, is a super-resolution light microscopy method that images and tracks objects in two and three dimensions with single-digit nanometer resolution. [ 1 ] [ 2 ] [ 3 ] MINFLUX uses a structured excitation beam with at least one intensity minimum, typically a doughnut-shaped beam with a central intensity zero, to elicit photon emission from a fluorophore. The position of the excitation beam is controlled with sub-nanometer precision, and when the intensity zero is positioned exactly on the fluorophore, the system records no emission. Thus, the system requires few emitted photons to determine the fluorophore's location with high precision. In practice, overlapping the intensity zero and the fluorophore would require a priori knowledge of the location to position the beam. As this is not the case, the excitation beam is moved around in a defined pattern to probe the emission from the fluorophore near the intensity minimum. [ 1 ] Each localization takes less than 5 microseconds, [ 1 ] so MINFLUX can construct images of nanometric structures or track single molecules in fixed and live specimens by pooling the locations of fluorescent labels. Because the goal is to locate the point where a fluorophore stops emitting, MINFLUX significantly reduces the number of fluorescence photons needed for localization compared to other methods. [ 2 ] [ 4 ] A commercial MINFLUX system is available from abberior instruments GmbH. [ 5 ] MINFLUX overcomes the Abbe diffraction limit in light microscopy and distinguishes individual fluorescing molecules by leveraging the photophysical properties of fluorophores. The system temporarily silences (sets in an OFF-state) all but one molecule within a diffraction-limited area (DLA) and then locates that single active (in an ON-state) molecule. [ 1 ] Super-resolution microscopy techniques like stochastic optical reconstruction microscopy (STORM) and photoactivated localization microscopy (PALM) do the same. [ 6 ] However, MINFLUX differs in how it determines the molecule's location. The excitation beam used in MINFLUX has a local intensity minimum, or intensity zero. The position of this intensity zero in a sample is adjusted via control electronics and actuators with sub-nanometer spatial and sub-microsecond temporal precision. When the active molecule located at $\vec{r}_m$ is in a non-zero intensity area of the excitation beam, it fluoresces. The number of photons $n$ emitted by the active molecule is proportional to the excitation beam intensity at that position. In the vicinity of the excitation beam intensity zero, the intensity $I$ of the emission from the active molecule when the intensity zero is located at position $\vec{r}$ can be approximated by a quadratic function. Therefore, the recorded number of emission photons is $n(\vec{r}, \vec{r}_m) = cI = c(\vec{r} - \vec{r}_m)^2$, where $c$ is a measure of the collection efficiency of detection, the absorption cross-section of the emitter, and the quantum yield of fluorescence. In other words, photon fluxes emitted by the active molecule when it is located close to the zero-intensity point of the excitation beam carry information about its distance to the center of the beam. That information can be used to find the position of the active molecule.
The position is probed with a set of $K$ excitation intensities $\{I_0, \ldots, I_{K-1}\}$. For example, the active molecule is excited with the same doughnut-shaped beam moved to different positions. The probing results in a corresponding set of photon counts $\{n_0, \ldots, n_{K-1}\}$. These photon counts are probabilistic; each time such a set is measured, the result is a different realization of photon numbers fluctuating around a mean value. Since their distribution follows Poissonian statistics, the expected position of the active molecule can be estimated from the photon numbers using, for example, a maximum likelihood estimation of the form $\widehat{\vec{r}_m} = \arg\max \mathcal{L}(\vec{r} \mid \{n_0, \ldots, n_{K-1}\})$. The position $\widehat{\vec{r}_m}$ maximizes the likelihood that the measured set of photon counts occurred exactly as recorded and is thus an estimate of the active molecule's location. [ 7 ] Recordings of the emitting active molecule at two different excitation beam positions are needed to use the quadratic approximation in the one-dimensional basic principle described above. Each recording provides a one-dimensional distance value to the center of the excitation beam. In two dimensions, at least three recording points are needed to ascertain a location that can be used to move the MINFLUX excitation beam toward the target molecule. These recording points demarcate a probing area $L$. Balzarotti et al. [ 1 ] use the Cramér–Rao limit to show that constricting this probing area significantly improves localization precision, more so than increasing the number of emitted photons: $\sigma_B \propto \frac{L}{\sqrt{N}}$, where $\sigma_B$ is the Cramér–Rao limit, $L$ is the diameter of the probing area, and $N$ is the number of emitted photons. MINFLUX takes advantage of this feature when localizing an active fluorophore. It records photon fluxes using a probing scheme of at least three recording points around the probing area $L$ and one point at the center. These fluxes differ at each recording point as the active molecule is excited by different light intensities. Those flux patterns inform the repositioning of the probing area to center on the active molecule. Then the probing process is repeated. With each probing iteration, MINFLUX constricts the probing area $L$, narrowing the space where the active molecule can be located. Thus, the distance remaining between the intensity zero and the active molecule is determined more precisely at each iteration. The steadily improving positional information minimizes the number of fluorescence photons and the time that MINFLUX needs to achieve precise localizations. [ 8 ] By pooling the determined locations of multiple fluorescent molecules in a specimen, MINFLUX generates images of nanoscopic structures with a resolution of 1–3 nm. [ 9 ] MINFLUX has been used to image DNA origami [ 1 ] [ 10 ] and the nuclear pore complex [ 11 ] and to elucidate the architecture of subcellular structures in mitochondria and photoreceptors. [ 12 ] [ 13 ] Because MINFLUX does not collect large numbers of photons emitted from target molecules, localization is faster than with conventional camera-based systems. [ 14 ]
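The probing-and-estimation scheme described above can be made concrete with a short numerical sketch. The following Python example is illustrative only and does not reproduce any particular MINFLUX implementation: the probing pattern (three points on a circle of diameter $L$ plus its center), the brightness constant, and the small background term added to keep the likelihood finite are all assumptions; only the quadratic intensity model and the Poisson maximum-likelihood estimator follow the description above.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Illustrative values (not from the article): brightness c, background b,
# and a fluorophore at an unknown position r_m (all lengths in nm).
c, b = 0.05, 0.5
r_m = np.array([3.0, -2.0])

# Probing pattern: three points on a circle of diameter L plus the center.
L = 50.0
angles = np.deg2rad([90.0, 210.0, 330.0])
probes = np.vstack([(L / 2) * np.column_stack([np.cos(angles), np.sin(angles)]),
                    [0.0, 0.0]])

def expected_counts(r):
    """Quadratic model n_k = c * |r_k - r|^2 + b near the intensity zero."""
    return c * np.sum((probes - r) ** 2, axis=1) + b

# One realization of the Poisson-distributed photon counts {n_0, ..., n_K-1}.
counts = rng.poisson(expected_counts(r_m))

def neg_log_likelihood(r):
    """Poisson negative log-likelihood of the measured counts, up to a constant."""
    mu = expected_counts(r)
    return np.sum(mu - counts * np.log(mu))

r_hat = minimize(neg_log_likelihood, x0=np.zeros(2)).x
print("true:", r_m, "estimate:", np.round(r_hat, 2))
```

Recentering the probing pattern on the estimate and shrinking $L$ before measuring again mirrors the iterative constriction of the probing area described above.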
Because localization is so fast, MINFLUX can iteratively localize the same molecule at microsecond intervals over a defined period. MINFLUX has been used to track the movement of the motor protein kinesin-1, both in vitro and in vivo, [ 15 ] [ 16 ] and to monitor conformational changes of the mechanosensitive ion channel PIEZO1. [ 17 ]
https://en.wikipedia.org/wiki/Minflux
In the field of ordinary differential equations, the Mingarelli identity [ 1 ] is a theorem that provides criteria for the oscillation and non-oscillation of solutions of some linear differential equations in the real domain. It extends the Picone identity from two to three or more differential equations of the second order. Consider the $n$ solutions of the following (uncoupled) system of second-order linear differential equations in self-adjoint form over the $t$–interval $[a, b]$: $(p_i(t)\,x_i')' + q_i(t)\,x_i = 0$, $i = 1, 2, \ldots, n$. Let $\Delta$ denote the forward difference operator, i.e. $\Delta x_i = x_{i+1} - x_i$. The second-order difference operator is found by iterating the first-order operator, as in $\Delta^2 x_i = \Delta(\Delta x_i) = x_{i+2} - 2x_{i+1} + x_i$, with a similar definition for the higher iterates. Leaving out the independent variable $t$ for convenience, and assuming the $x_i(t) \neq 0$ on $(a, b]$, there holds an identity relating the solutions and their coefficients, [ 2 ] whose precise form is given in the cited reference. When $n = 2$ this identity reduces to the Picone identity. The above identity leads quickly to the following comparison theorem for three linear differential equations, [ 3 ] which extends the classical Sturm–Picone comparison theorem. Let $p_i$, $q_i$, $i = 1, 2, 3$, be real-valued continuous functions on the interval $[a, b]$ and let $(p_i(t)\,x_i')' + q_i(t)\,x_i = 0$, $i = 1, 2, 3$, be three homogeneous linear second-order differential equations in self-adjoint form. Assume that for all $t$ in $[a, b]$ the coefficients satisfy the comparison inequalities of [ 3 ]. Then, if $x_1(t) > 0$ on $[a, b]$ and $x_2(b) = 0$, any solution $x_3(t)$ has at least one zero in $[a, b]$.
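For orientation, the $n = 2$ case can be written out explicitly. The following is the standard textbook form of the Picone identity in the notation above (a well-known identity, not a quotation of Mingarelli's formulation): given solutions $x_1$, $x_2$ of $(p_i x_i')' + q_i x_i = 0$, $i = 1, 2$, with $x_2 \neq 0$,

```latex
\frac{d}{dt}\left[\frac{x_1}{x_2}\bigl(p_1 x_1' x_2 - p_2 x_2' x_1\bigr)\right]
  = (q_2 - q_1)\,x_1^2 + (p_1 - p_2)\,(x_1')^2
  + p_2\left(x_1' - \frac{x_2'}{x_2}\,x_1\right)^{2}.
```

Integrating this identity over $[a, b]$ and examining the boundary terms yields the classical Sturm–Picone comparison theorem; the Mingarelli identity plays the analogous role for three or more equations, with differences of the coefficients replaced by higher-order differences built from the operator $\Delta$ defined above.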
https://en.wikipedia.org/wiki/Mingarelli_identity
Mini-Reviews in Medicinal Chemistry is a monthly peer-reviewed medical journal covering all aspects of medicinal chemistry. [ 1 ] It is published by Bentham Science Publishers, and the editors-in-chief are Atta-ur-Rahman (University of Cambridge), M. Iqbal Choudhary (University of Karachi), and George Perry (University of Texas at San Antonio). The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.862. [ 2 ] [ 3 ] Mini-Reviews in Medicinal Chemistry employs peer review; [ 4 ] however, several scientists have raised concerns about whether it is a predatory journal after being invited to review articles or serve as an editor in areas where they have no scientific expertise. [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/Mini-Reviews_in_Medicinal_Chemistry
Mini-puberty is a transient hormonal activation of the hypothalamic-pituitary-gonadal (HPG) axis that occurs in infants shortly after birth. This period is characterized by a surge in the secretion of gonadotropins (LH and FSH) and sex steroids (testosterone in males and estradiol in females), similar to but less intense than the hormonal changes that occur in puberty during adolescence. Mini-puberty plays a crucial role in the early development of the reproductive system and the establishment of secondary sexual characteristics. Mini-puberty begins within the first few days or weeks of life and typically lasts until 6–12 months of age. [ 1 ] The HPG axis is temporarily reactivated, resulting in increased secretion of gonadotropin-releasing hormone (GnRH) from the hypothalamus. GnRH stimulates the pituitary gland to release luteinizing hormone (LH) and follicle-stimulating hormone (FSH), which in turn stimulate the gonads (testes in males and ovaries in females) to produce sex steroids. Mini-puberty is crucial for several developmental processes. It can also serve as a valuable diagnostic window for identifying congenital abnormalities of the HPG axis or gonads. [ 4 ] [ 5 ] Conditions such as congenital hypogonadotropic hypogonadism and certain forms of intersex can be diagnosed during this period by evaluating hormone levels and gonadal response. Disruptions in the mini-puberty process can lead to various clinical conditions. Environmental factors, such as exposure to endocrine-disrupting chemicals (EDCs), have been shown to impact mini-puberty. [ 1 ] [ 8 ] [ 9 ] EDCs are widespread in daily life and can be found in products such as pesticides and personal care items. Bisphenol A (BPA) [ 10 ] and many phthalates [ 11 ] are known to interfere with the earlier HPG axis activation during pregnancy for boys, affecting testosterone levels during mini-puberty, anogenital distance (AGD), and testicular descent. More recently, BPA and phthalate exposure during mini-puberty have been shown to interfere with HPG axis activation and testosterone levels during that same time frame, suggesting that mini-puberty is a particularly vulnerable window for EDC exposure. [ 12 ] Such disruptions may lead to long-term consequences, including delayed or precocious puberty, reproductive health issues, and increased risk of conditions like polycystic ovary syndrome (PCOS), [ 13 ] breast cancer, [ 14 ] and prostate cancer. In a small study, it was shown that "PCDD/Fs and PCBs measured in breast milk collected within the first 3 weeks following birth were more strongly associated with sexually dimorphic outcomes than exposures measured in maternal blood collected between weeks 28 and 43" of pregnancy, [ 9 ] adding evidence that EDC exposure during mini-puberty may interfere with endocrine and neurological development. Although the phenomenon has been known for over 40 years, [ 2 ] research into mini-puberty continues to uncover its broader implications for long-term health and development. The potential impact of environmental factors and endocrine disruptors on mini-puberty is an area of active investigation. At the same time, researchers are also investigating whether mini-puberty may be a window in which to treat certain disorders, e.g. treating micropenis with gonadotropin or testosterone injections. [ 15 ]
https://en.wikipedia.org/wiki/Mini-puberty
A miniature effect is a special effect created for motion pictures and television programs using scale models. Scale models are often combined with high-speed photography or matte shots to make gravitational and other effects appear convincing to the viewer. The use of miniatures has largely been superseded by computer-generated imagery in contemporary cinema. Where a miniature appears in the foreground of a shot, it is often very close to the camera lens — for example when matte-painted backgrounds are used. Since the exposure is set for the full-scale subject being filmed, so that the actors appear well-lit, the miniature must be over-lit in order to balance the exposure and eliminate any depth-of-field differences that would otherwise be visible. This foreground miniature usage is referred to as forced perspective. Another form of miniature effect uses stop-motion animation. The use of scale models in the creation of visual effects by the entertainment industry dates back to the earliest days of cinema. Models and miniatures are copies of people, animals, buildings, settings, and objects. Miniatures or models are used to represent things that do not really exist, or that are too expensive or difficult to film in reality, such as explosions, floods, or fires. [ 1 ] French director Georges Méliès incorporated special effects in his 1902 film Le Voyage dans la Lune (A Trip to the Moon), including double exposure, split screens, miniatures, and stop action. [ 2 ] Some of the most influential visual-effects films of these early years include Metropolis (1927), Citizen Kane (1941), Godzilla (1954), and The Ten Commandments (1956). [ 3 ] The 1933 film King Kong made extensive use of miniature effects, including scale models and stop-motion animation of miniature elements. The use of miniatures in 2001: A Space Odyssey [ 4 ] was a major development; in production for three years, the film was a significant advancement in creating convincing models. In the early 1970s, miniatures were often used to depict disasters in such films as The Poseidon Adventure (1972), Earthquake (1974), and The Towering Inferno (1974). The resurgence of the science fiction genre in film in the late 1970s saw miniature fabrication rise to new heights in such films as Close Encounters of the Third Kind (1977), Star Wars (also 1977), Alien (1979), Star Trek: The Motion Picture (1979), and Blade Runner (1982). Iconic film sequences such as the tanker truck explosion from The Terminator (1984) and the bridge destruction in True Lies (1994) were achieved through the use of large-scale miniatures. The release of Jurassic Park (1993) was a turning point in the use of computers to create effects for which physical miniatures would have previously been employed. While the use of computer-generated imagery (CGI) has largely overtaken miniatures since then, they are still often employed, especially for projects requiring physical interaction with fire, explosions, or water. [ 5 ] Independence Day (1996), Titanic (1997), Godzilla (1998), the Star Wars prequel trilogy [ 6 ] [ 7 ] [ 8 ] (1999–2005), The Lord of the Rings trilogy (2001–2003), Casino Royale (2006), The Dark Knight [ 9 ] (2008), Inception (2010), and Interstellar [ 10 ] (2014) are examples of highly successful films that have utilized miniatures for a significant component of their visual effects work.
https://en.wikipedia.org/wiki/Miniature_effect
Miniature food is a replica of a dish made at a much smaller scale than the original. It may take the form of an inedible toy or accessory, or of an edible foodstuff made either from the same ingredients as the original dish or from candy or other substitutes, sometimes prepared with real working miniature kitchens and cookware. Miniature food is an example of miniature art. Regular-sized food models first appeared in Uichi, Japan, in 1917, to display previews of food in the windows of restaurants. Businesses that produced and sold the food models were set up by Iwasaki Ryuzo in 1932. Early models of food were made from wax; nowadays, they are mostly made from plastic and polymer clay, a type of clay that hardens when heated. [ 1 ] Generally, delicate and tiny items are called "kawaii" in Japanese; miniature food is created with the Japanese miniature-art techniques of recent decades. [ 2 ] The creation of miniature food with edible ingredients was popularized by YouTubers Miniature Space and AAAjoken. [ 3 ] In 2015, a report from video-intelligence firm Tubular Labs indicated that these miniature food videos contributed up to 3% of the total views in the food category. [ 4 ] Miniature food can be either edible or inedible. Edible miniature food is made from real ingredients cooked with miniature utensils like tiny woks, pans, and knives. [ 5 ] In order to make the miniature food look more realistic, the ingredients will sometimes vary from the original recipes. The food may not be cooked in any type of ceramic cooker; miniature stoves powered by candles or small pieces of wood can be used instead. [ 2 ] [ 6 ] Inedible miniature food is made from materials like polymer clay, resin, and chalk pastels. It is more common than edible miniature food because it serves a wider variety of purposes, such as jewelry, handicrafts, and toys. Also, while the ingredients used in edible miniature food are limited, there are more options when making inedible miniature food. The food and the utensils are usually made of polymer clay and dry glue. The artists use dedicated modeling tools to mould and shape the food; sometimes utensils such as sewing needles and toothpicks are also seen in the process of moulding and shaping. People can purchase these tiny creations to decorate their households or workplaces, and some buy miniature food as gifts or to collect. Tomo Tanaka, who lives in Osaka, makes items in 1:12 and 1:24 scale for display. [ 7 ] The YouTube channel Miniature Space uploads videos of making edible miniature meals with ingredients like quail eggs, chicken, and fish. [ 8 ] Caroline McFarlane-Watts of Tall Tales Productions makes miniature items in 1:12 scale for film, TV, display, collectors, and YouTube videos. [ 9 ] Shay Aaron, a miniature-food jewelry artist, makes jewelry collections with the polymer clay Fimo and other materials, such as metal and paper, in 1:12 scale. [ 10 ]
https://en.wikipedia.org/wiki/Miniature_food
Miniature hydraulics are copies or models that represent and reproduce regular or standard-sized hydraulic systems and components at a greatly reduced size. True working miniature hydraulics follow the same operating principles and behavioral properties as their standard-size hydraulic prototypes, but operate primarily at reduced sizes and pressures. Although uncommon, miniature hydraulics do exist and are obtainable through a variety of sources. Miniature hydraulics, mini hydraulics, and micro hydraulics are also abbreviated as M-H.
https://en.wikipedia.org/wiki/Miniature_hydraulics
In miniature wargaming, players enact simulated battles using scale models called miniature models, which can be anywhere from 2 to 54 mm in height, to represent warriors, vehicles, artillery, buildings, and terrain. These models are colloquially referred to as miniatures or minis. Miniature models are commonly made of metal, plastic, or paper. They are used to augment the visual aspects of a game and track position, facing, and line of sight of characters. Miniatures are typically painted and can be artfully sculpted, making them collectible in their own right. Pre-painted plastic figures, such as Clix miniatures produced by WizKids and unpainted plastic figures for Warhammer by Games Workshop, have become popular. The hobby of painting, collecting, and playing with miniatures originated with toy soldiers, though the latter were generally sold pre-painted. Miniature models are derived from toy soldiers, which were constructed of a variety of materials. [ 1 ] These toy figures came to be mass-produced from tin in late-1700s Germany, where they were called Zinnsoldaten (lit. "tin soldiers"). These early figures were flat models commonly called "flats", and became quite common in western Europe. By the mid-1800s, manufacturers in several countries were producing three-dimensional miniatures of tin and lead alloys, commonly called white metal. In 1993, the New York legislature introduced a bill outlawing lead in miniatures, citing public health concerns. Many miniature manufacturers, anticipating that other states would also impose bans, began making figures with lead-free alloys, often at increased prices. [ 2 ] After months of debate and protests by miniature manufacturers and enthusiasts, New York Governor Mario Cuomo signed a bill which exempted miniatures from the state's public health law. [ 3 ] Despite this, most American manufacturers continued to use non-lead alloys. [ 4 ] In the 20th century, miniatures would also be manufactured from plastic and composite materials. [ citation needed ] Some wargames use "box miniatures", consisting of card stock folded into simple cuboids with representative art printed on the outside. Other games use 2D cardboard miniatures that are either held in a base or folded into a triangular tent. Historically, the size of miniatures was described in absolute scale in various systems of measurement, most commonly metric and English units. A 28 mm miniature measures 28 mm from the feet of the figure to a chosen reference point. The most common miniatures were the 54 mm European miniatures and the 2 1/4" English models, which are commonly considered to be 1:32 scale. [ 5 ] Early wargames such as H. G. Wells' Little Wars used these commonly available miniatures. With metrication in the United Kingdom, United States manufacturers began to use the metric system to describe miniatures, as opposed to the previously popular customary units, so that their table-top wargaming models would be compatible. Today, the scale of a figure is often described in millimeters; for example, one of the most common scales is 28 mm. Manufacturers settle on a nominal size and try to make every miniature match that size, or at least average around it. While a model may be described as 28 mm, its actual height may differ because of a number of factors, such as the manufacturer, the model's proportions, the method of measuring, the model's pose, and what sort of person the model is meant to represent.
A manufacturer might advertise its figures as 28 mm, but their products may be over 30 mm tall. In 28 mm scale, short characters such as dwarves, hobbits, and goblins might be represented by figures in the 15 to 20 mm range, while taller characters like ogres, trolls, and dragons would use 30 mm or larger figures. Manufacturers' use of scale is not uniform and can deviate by as much as 30%. [ citation needed ] Some manufacturers measure figure height from the feet to the eyes rather than the top of the head; therefore, a figure that is 30 mm to the top of its head could be considered a 28 mm miniature. Figures of 15 mm, 20 mm, 25 mm, 28 mm, 30 mm, 32 mm, and 35 mm are the most common for role-playing and table-top games. Smaller figures of 2 mm, 6 mm, 10 mm, 15 mm, and 20 mm are used for mass-combat wargames. Large sizes such as 40 mm and 54 mm were popular with wargamers in the past and are still used by painters and collectors. While the large miniatures have become popular again since the late 20th century, they are not as popular as the smaller sizes. In many games there is a definite scale specified for the square grid that the game is played upon. One of the most common is 1 inch represents 5 feet, which specifies an exact scale of 1:60. That implies that a 28 mm tall figurine represents a 1.68 m (5 ft 6 in) person – a reasonable number for a modern 50th-percentile male (see: Human height). Another popular scale is 1/72, or 1 inch equals 6 feet, which uses 20 mm to 25 mm miniatures. It is mostly used for historical gaming, in part due to a wide selection of 1/72 scale models. Figures are commonly used with a variety of scales, and it is not uncommon for there to be a mismatch between the game scale and miniature size. Chainmail used a scale of 1:360, [ 6 ] appropriate to 5 mm miniatures, but was played with 30 mm miniatures, [ 7 ] under the conceit that each figure represented 20 men. In the table below, figure height alone (excluding base thickness) is the feature from which approximate scale is calculated. Scales smaller still are used when the game involves large vehicles (such as starships or battleships). For instance, Axis & Allies Naval Miniatures: War at Sea uses 1:1800 scale, and scales down to 1:6000 are seen. There is no equivalent "mm" number for these scales, as individual figures would be nearly microscopic and are not used as such in the games. [ citation needed ] A further complication is differing interpretations of body proportions. Many gaming figures are unrealistically bulky for their height, with oversized feet, heads, hands, wrists, and weapons. Making these parts oversized allows more detail to be present in the miniatures. Some of these exaggerations began as concessions to the limitations of primitive mold-making and sculpting techniques, but they have evolved into stylistic conventions. Figurines with these exaggerated features are often referred to as heroic scale. There is a noted tendency in miniature figure manufacture for bigger and bigger figures to be produced over time. Larger models were easier to produce correctly, especially in the 20th century: bigger details come out better, and larger surfaces are easier to paint. When a company sees that people are still buying the larger models, that is an incentive to continue making larger ones. Scale can also be expressed as a ratio that compares the size of the model to the size of a real-life object. [ 12 ] This ratio shows how many times smaller the model is than the original.
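The arithmetic connecting a figure's height in millimeters, the assumed real-world height of the person it represents, and the 1:N scale ratio can be sketched in a few lines of Python; the example heights below are illustrative, not drawn from any manufacturer's catalogue:

```python
def scale_ratio(figure_height_mm: float, reference_height_m: float) -> float:
    """Denominator N of the 1:N scale implied by a figure of a given height."""
    return reference_height_m * 1000 / figure_height_mm

def represented_height_m(figure_height_mm: float, ratio: float) -> float:
    """Real-world height (in metres) that a figure represents at scale 1:ratio."""
    return figure_height_mm * ratio / 1000

# A 28 mm figure representing a 1.68 m (5 ft 6 in) person implies 1:60.
print(scale_ratio(28, 1.68))            # 60.0
# Read against a 2.0 m reference instead, the same figure implies ~1:71.
print(round(scale_ratio(28, 2.0), 1))   # 71.4
# At 1:60, a 15 mm figure represents a 0.9 m tall humanoid.
print(represented_height_m(15, 60))     # 0.9
```

As the second call shows, the same physical figure implies a different scale ratio depending on the assumed reference height, which is the point developed next.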
The meaning of 15 mm (for example) is therefore dependent on a defined reference height. Thus 15 mm in the context of a dwarven world where the reference humanoid is 60 inches (152 cm) tall is not equivalent to 15 mm in the context of an NBA model where the reference humanoid is 2 meters tall. Both models can be described as 15 mm, but the real-world sizes depend on the size of the reference humanoid. In practice, the reference humanoid is generally assumed to be an average-height human male, somewhere in the 6-inch (15 cm) interval between 5.5 and 6 feet (168 and 183 cm), unless otherwise indicated by the designer. [ 13 ] [ 14 ] Average human height is heavily dependent on the population measured within a geographical region and historical era. The following chart provides a numerical relationship between model scale and multiple figurine scales based on an idealized height of a human being (humanoid). Many role-playing gamers and wargamers paint their miniatures to differentiate characters or units on a gaming surface (terrain, battle mat, or unadorned tabletop). Fantasy, role-playing, miniatures, and wargaming conventions sometimes feature miniature-painting competitions, such as Games Workshop's Golden Demon contest. There are also many painting competitions on the internet. There are two basic methods of manufacturing figures: centrifugal/gravity casting and plastic injection casting. Most metal and resin figures are made through spin casting. Larger resin models, like buildings and vehicles, are sometimes gravity cast, which is a slower process. To gravity cast, a sculptor develops a master figure, which is then used to create rubber master and production moulds. The production moulds are used to cast the final commercial figures. Polyethylene and polystyrene figures are made by injection moulding. A machine heats plastic and injects it under high pressure into a steel mould. This is an expensive process; it is only cost-effective when manufacturing large quantities of figures, since the quantity renders the cost per cast minimal. Many miniatures companies do not produce their figures themselves but leave the manufacturing to specialized casting companies or miniatures companies that have casting facilities. Most miniatures are hand-sculpted using two-component epoxy putties at the same size as the final figure. The components of the putty are mixed together to create a sculpting compound that hardens over 48 hours. Some common brands include Polymerics' Kneadatite blue/yellow (also known as "green stuff" and "Duro" in Europe), Milliput, A&B, Magic Sculpt, and Kraftmark's ProCreate. Until recently, sculptors avoided polymer clays as they cannot withstand the traditional mould-making process. Modern techniques using RTV silicone and softer rubbers have made it possible to use weaker materials, so polymer clay masters have become more common. Fimo clay is popular, though due to the individual properties of certain colours, only a limited selection of colours is used. Masters for plastic miniatures are often made at a larger scale, frequently three times the required size. The master is measured with a probe linked to a pantograph that reduces the measurements to the correct size and drives the cutter that makes the moulds. A more recent development is the use of digital 3D models made by computer artists. These digital models are used to create a physical master for mould-making using rapid prototyping techniques.
Alternatively, they can be used directly to drive a computer numerical control machine that cuts the steel mould, or the moulding steps can be skipped entirely by producing miniatures directly from the 3D models. [ 15 ] Originally, Dungeons & Dragons was an evolution of the Chainmail medieval miniatures game, [ 16 ] with the distinction that each player controlled a single figure and had a wider variety of actions available. The original D&D boxed set bore the subtitle "Rules for Fantastic Medieval Wargames Campaigns Playable with Paper and Pencil and Miniature Figures". However, Dungeons & Dragons did not require miniatures, referring to them as "only aesthetically pleasing". [ 17 ] Advanced Dungeons & Dragons likewise included a relatively short section describing miniature use, in conjunction with the official AD&D miniatures being produced at the time. [ 18 ] As the game developed, miniatures became more of an optional add-on. [ 19 ] The AD&D 2nd Edition accessory Player's Option: Combat & Tactics introduced a more elaborate grid-based combat system that emphasized the use of miniatures; a streamlined version of some of these concepts appeared in D&D 3rd edition. Although they are not strictly necessary, the 4th edition of the game assumes the use of miniatures, and many game mechanics refer explicitly to the combat grid. In addition to reducing ambiguity about the size and position of characters, this allows the game to specify rules for reach, threatened areas, and movement rates. The 5th edition de-emphasized these mechanics and returned the use of miniatures to mostly optional. Some games feature miniatures printed on cardboard or cardstock, and some companies have published such miniatures to be used in place of miniature models.
https://en.wikipedia.org/wiki/Miniature_model_(gaming)
Miniature pioneering or model pioneering is an art form featuring miniaturized versions of pioneering construction. The technique was originally used by Boy Scouts to create models for campsite planning. [ citation needed ] Models are a convenient way to plan a construction project: they require the same techniques as the full-scale structure, allow accurate equipment lists to be developed, and help identify difficulties in sequencing construction. Scout troops in Malaysia have also developed the practice into a new form of art through competitions. While real pioneering combines wooden spars and ropes, in miniature pioneering these materials are replaced by wooden sticks and white thread. Although design and complexity play a major part in judging the value of a model, lashing quality plays a major role and is evaluated on three criteria: tightness, tidiness, and cleanliness. As with all handmade models, making a miniature pioneering model is good training in patience and the pursuit of perfection.
https://en.wikipedia.org/wiki/Miniature_pioneering
Miniature wargaming is a form of wargaming in which military units are represented by miniature physical models on a model battlefield. Miniature wargames are played using model soldiers, vehicles, and artillery on a model battlefield, with the primary appeal being recreational rather than functional. Miniature wargames are played on custom-made battlefields, often with modular terrain, and abstract scaling is used to adapt real-world ranges to the limitations of table space. The use of physical models to represent military units is in contrast to other tabletop wargames that use abstract pieces such as counters or blocks, or computer wargames which use virtual models. The primary benefit of using models is immersion, though in certain wargames the size and shape of the models can have practical consequences on how the match plays out. Models' dimensions and positioning are crucial for measuring distances during gameplay. Issues concerning scale and accuracy compromise realism too much for most serious military applications. Miniature wargames can be skirmish-level, where individual warriors are controlled, or tactical-level, where groups are commanded. Most wargames are turn-based, involving movement and combat resolved through arithmetic and dice rolls. The setting of a game determines the type of units used, with popular historical themes including World War II, the Napoleonic Wars, and the American Civil War, while Warhammer 40,000 is the leading fantasy setting. Models, historically made from lead or tin, are now typically made of plastic or resin, with larger companies favoring plastic for its mass-production advantages. While some companies sell pre-painted models, most require assembly and customization by players. In historical miniature wargames, generic models are used, but fantasy wargames, like Warhammer, feature proprietary models, making them more expensive. The community is social, with conventions and clubs playing a significant role. Painting and assembling models are integral aspects of the hobby. The hobby primarily attracts older enthusiasts due to the time, skill, and financial investment required. A miniature wargame is played with miniature models of soldiers, artillery, and vehicles on a model of a battlefield. The benefit of using models as opposed to abstract pieces is primarily an aesthetic one. Models offer a visually pleasing way of identifying the units on the battlefield. In most miniature wargame systems, the model itself may be irrelevant as far as the rules are concerned; what really matters are the dimensions of the base that the model is mounted on. Distances between infantry units are measured from the base of the model. [ 1 ] The exception to this trend may be models of vehicles such as tanks, which do not require a base to be stable and have naturally rectangular shapes; in such cases, the distances between units may be measured from the edge of the model itself. Some miniature wargames use the dimensions of the model to determine whether a target behind cover is within line-of-fire of an attacker. Most miniature wargames are turn-based. Players take turns to move their model warriors across the model battlefield and declare attacks on the opponent. In most miniature wargames, the outcomes of fights between units are resolved through simple arithmetic, usually combined with dice rolls or playing cards. All historical wargames have a setting that is based on some historical era of warfare.
The setting determines what kind of units the players can deploy in their match. For instance, a wargame set in the Napoleonic Wars should use models of Napoleonic-era soldiers, wielding muskets and cannons, not spears or automatic rifles. A fantasy wargame has a fictional setting and may thus feature fictional or anachronistic armaments, but the setting should be similar enough to some real historical era of warfare so as to preserve a reasonable degree of realism. [ 2 ] For instance, Warhammer Age of Sigmar is mostly based on medieval warfare, but includes supernatural elements such as wizards and dragons. The most popular historical settings are World War II, the Napoleonic Wars, and the American Civil War (in that order). [ 3 ] The most popular fantasy setting is Warhammer 40,000. [ 4 ] [ 5 ] Miniature wargames are played either at the skirmish level or the tactical level. At the skirmish level, the player controls warriors individually, whereas in a tactical-level game he or she controls groups of warriors; typically the model warriors are mounted in groups on the same base. Miniature wargames are not played at the strategic or operational level because at that scale the models would become imperceptibly tiny. Miniature wargames are generally played for recreation, as the physical limitations of the medium prevent it from representing modern warfare accurately enough for use in military instruction and research (see the section below on abstract scaling for one reason). A historical exception to this is naval wargaming before the advent of computers. Historically, these models were commonly made of tin or lead, but nowadays they are usually made of polystyrene or resin. Plastic models are cheaper to mass-produce but require a larger up-front investment because of the expensive steel molds involved. Lead and tin models, by contrast, can be cast in cheap rubber molds. Larger firms such as Games Workshop prefer to produce plastic models, whereas smaller firms with less money prefer metal models. [ 6 ] Wargaming figurines often come with unrealistic body proportions. Their hands may be oversized, or their rifles excessively thick. One reason for this is to make the models more robust: thicker parts are less likely to bend or break. Another reason is that manufacturing methods often stipulate a minimum thickness for casting, because molten plastic has difficulty flowing through thin channels in the mold. Finally, odd proportions may actually make the model look better for its size by accentuating certain features that the human eye focuses on. [ 7 ] Wargaming models are often sold in parts. In the case of plastic models, they are often sold still affixed to their sprues. The player is expected to cut out the parts and glue them together. This is the norm because, depending on the design of the model, it may not be possible to mold it whole, and selling the parts unassembled saves on labor costs. After assembling the model, the player should then paint it to make it more presentable and easier to identify on the game table. Understandably, the time and skill involved in assembling and painting models deter many people from miniature wargaming. Some firms have tried to address this by selling pre-assembled and pre-painted models, but these are rare because, with current technologies, it is hard to mass-produce ready-to-play miniatures that are both cheap and match the beauty of hand-painted models.
[ 8 ] The other options for players are to buy finished models second-hand or to hire a professional painter. Historical miniature wargames are typically designed to use generic models. It is generally not possible to copyright the look of a historical soldier. Anyone, for instance, may freely produce miniature models of Napoleonic infantrymen. A player of a Napoleonic-era wargame could thus obtain their models from any manufacturer who produces Napoleonic models at the requisite scale. Consequently, it is difficult if not impossible for a historical wargame designer to oblige players to buy models from a certain manufacturer. By contrast, fantasy wargames feature fictional warriors, and fictional characters can be copyrighted. By incorporating original characters into their wargame, a wargame designer can oblige the player to purchase their models from a specific manufacturer who is licensed to produce the requisite models. An example of this is Warhammer 40,000, which features many original characters with a distinctive aesthetic, and Games Workshop and its subsidiaries reserve the exclusive right to manufacture models of these characters. Games Workshop models tend to be expensive because competing manufacturers are not allowed to offer cheaper copies of official Warhammer 40,000 models. While there is nothing to stop players from using foreign wargaming models (generics or proprietary models from other wargames), doing so could spoil the aesthetic and cause confusion. A miniature wargame is played on a model of a battlefield, usually mounted on a table. As far as size goes, every part of the battlefield should be within arm's reach of the players; a width of four feet is recommended. [ 9 ] [ 10 ] [ 11 ] Most miniature wargames are played on custom-made battlefields made using modular terrain models. Historical wargamers sometimes re-enact historical battles, but this is relatively rare; players more often prefer to design their own scenarios. The first advantage is that they can design a scenario that fits the resources they have at hand, whereas reconstructing a historical battle may require them to purchase additional models and rulebooks, and perhaps a larger game table. The second advantage is that a fictional scenario can be designed such that either player has a fair chance of winning. [ 12 ] Miniature wargames are rarely set in urban environments. The first reason is that it is harder to reach models when there are many buildings in the way. Another reason is that the buildings may highlight the abstract scale at which the wargame operates. For instance, in the 28 mm wargame Bolt Action, a rifle's range is 24 inches, which is barely the length of a few houses at 28 mm scale. If placed in an urban environment, a rifleman would not be able to hit a target at the far end of a small street, which shatters the illusion of realism. [ 13 ] The scale of a model vehicle can be expressed as a scale ratio. A scale ratio of 1:100 means that 1 cm represents 100 cm; at this scale, if a model car is 4.5 cm long, then it represents a real car that is 4.5 m long. When it comes to figurines of humans, the preferred method of expressing scale is the height of a figurine in millimeters. There is no standardized system of measuring figurine size in the wargaming hobby. Some manufacturers measure the height of a figurine up to the crown of the head, whereas others may measure it up to the eyes (the latter is more sensible if the figurine is wearing a hat).
[ 14 ] Furthermore, the advertised scale of a model may not reflect its actual scale. In order to make their products stand out against competitors', some manufacturers make their models a little oversized; e.g. a model from a certain manufacturer that is advertised as suitable for 28 mm wargames could actually be 30 mm tall in practice. This makes the model look more imposing and allows for more detail. [ 15 ] Manufacturers of generic wargaming models are generally obliged to build their models to some standard scale so as to ensure compatibility with third-party wargames. Manufacturers who make proprietary models designed exclusively for use in a specific wargame do not have this concern. For instance, Warhammer 40,000 officially does not have a scale. It does not need to conform to a standard scale because Games Workshop is the exclusive manufacturer of official Warhammer 40,000 models, those models are intended exclusively for use in Warhammer 40,000, and Games Workshop does not want players using foreign models from other manufacturers. Most miniature wargames do not have an absolute scale, i.e. one where the figurines, terrain, movement, and firing ranges all conform to a single scale ratio. This is largely because of the need to compress the battle into the confined space of a table surface. Instead, miniature wargames prefer to use abstract scaling. For example, a 28 mm model rifleman realistically ought to be able to hit a target from 20 feet away, [ b ] but this is larger than most tables. A miniature wargame would not be much fun if the models could shoot each other from opposite ends of the table and thus never have to maneuver around the battlefield. The 28 mm wargame Bolt Action solves this problem by compressing the range of a rifle to just 24 inches; [ 10 ] likewise, a sub-machine gun's range is 12 inches and a pistol's range is 6 inches. [ 16 ] These ranges may not be realistic, but at least their proportions make intuitive sense, giving an illusion of realism. Abstract scaling may also be applied to figures and terrain features: e.g. model houses and trees may be a little undersized compared to the scale so as to make more room on the table for the warriors. Likewise, model figures will often be oversized for the scale; for example, many games use 25 mm figures appropriate to a 1:60 scale when the game is played at a ground scale such as 1:360. Most miniature wargames do not have a fixed time scale (i.e. how many seconds a turn represents). Most wargame rulebooks instead prefer to define how far a unit can move in a turn, and this movement range is proportioned to the size of a typical game table. For example, Bolt Action sets a movement range of six inches in a turn for most units. There are many miniature wargaming rules, not all of which are currently in print, including some which are available free on the internet; many gamers also write their own, creating so-called "house rules" or "club sets". Most rules are intended for a specific historical period or fictional genre. Rules also vary in the model scale they use: one infantry figure may represent one man, one squad, or a much larger number of actual troops. Wargaming in general owes its origins to military simulations, most famously to the Prussian staff training system Kriegsspiel. Consequently, rules designers struggle with the perceived obligation to actually 'simulate' something, and with the seldom-compatible necessity of making an enjoyable 'game'.
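To put rough numbers on the compression involved in abstract scaling, the following back-of-the-envelope Python sketch compares the Bolt Action tabletop ranges quoted above with typical real-world effective ranges. The 1:60 ground scale and the real-world range figures here are illustrative assumptions, not values from any rulebook:

```python
# Illustrative comparison of tabletop ranges vs. real-world effective ranges.
# Assumptions (not from any rulebook): 28 mm figures read as roughly 1:60,
# and the real-world effective ranges below are rounded typical values.
FIGURE_SCALE = 60
INCH_IN_METERS = 0.0254

tabletop_range_inches = {"rifle": 24, "sub-machine gun": 12, "pistol": 6}
real_range_m = {"rifle": 500, "sub-machine gun": 100, "pistol": 50}

for weapon, inches in tabletop_range_inches.items():
    scaled_m = inches * INCH_IN_METERS * FIGURE_SCALE  # tabletop range read at 1:60
    factor = real_range_m[weapon] / scaled_m           # how much is compressed away
    print(f"{weapon}: {inches} in on the table = {scaled_m:.0f} m at 1:60; "
          f"real ~{real_range_m[weapon]} m (compressed ~{factor:.0f}x)")
```

Note that the compression is not uniform across weapons; only the ordering of the ranges is preserved, which is precisely the "illusion of realism" the text describes.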
Historical battles were seldom fair or even, and the potential detail that can be brought to bear to represent this in a set of rules always comes at the cost of the game's pace and enjoyment. Osprey Publishing's book about the Battle of Crécy, from its series on historical campaigns, includes a detailed section on wargaming the battle, in which Stuart Asquith writes: When refighting a particular battle, it is important to adhere as closely as possible to the original historical engagement. The counter-argument is that the wargamer(s) know who is going to win. Fair comment, but knowing the outcome of any battle does not usually prevent one from reading about that action, so why should such knowledge debar a refight? [ 17 ] He adds that unless at least the initial moves are recreated, "then an interesting medieval battle may well take place, but it will not be a re-creation of Crécy." [ 17 ] Rules aimed at the non-professional hobby market therefore inevitably contain abstractions. It is generally in the area of the abstraction liberties taken by the designers that the differences between rules can be found. Most follow tried-and-true conventions to the extent that a chess player would recognize wargaming merely as a differently scaled version of his or her own game. During the 1960s and 1970s, two new trends in wargaming emerged. First were small-unit rule sets which allowed individual players to portray small units, down to even a single figure; these rules expanded the abilities of the smaller units accordingly, to magnify their effect on the overall battle. Second was an interest in fantasy miniatures wargaming. J. R. R. Tolkien's novel The Hobbit and his epic cycle The Lord of the Rings were gaining strong interest in the United States, and as a result, rules were quickly developed to play medieval and Roman-era wargames, where these eras had previously been largely ignored in favor of Napoleonic and American Civil War gaming. The two converged in the early 1970s. The first known occurrence, from 1970, is a set of rules by Len Patt [ 18 ] [ 19 ] published in the New England Wargames Association's bulletin, The Courier. In 1971 a set of medieval miniatures rules entitled Chainmail, published by a tiny company called Guidon Games, headquartered in Belfast, Maine, [ 20 ] included a fantasy supplement detailing rules for battles involving fantastic creatures. Later, in 1974, TSR designer E. Gary Gygax wrote a set of rules for individual characters under Chainmail and entitled it Dungeons & Dragons. Further developments ensued, and the role-playing game hobby quickly became distinct from the wargaming hobby which preceded it. Although generally less popular than wargames set on land, naval wargaming nevertheless enjoys a degree of support around the world. Model ships have long been used for wargaming, but it was the introduction of elaborate rules in the early 20th century that made the hobby more popular. Small miniature ships, often in 1:1200 scale and 1:1250 scale, were maneuvered on large playing surfaces to recreate historical battles. Prior to World War II, firms such as Bassett-Lowke in England and the German company Wiking marketed these to the public. [ 21 ] [ 22 ] After World War II, several manufacturers started business in Germany, which remains the center of production to this day, [ 23 ] while other companies started in England and the United States. Rules can vary greatly between game systems, both in complexity and era.
Historical rulesets range from ancient and medieval ships to the fleets of the Age of Sail and the modern era. Often the hobbyists have to provide their own models of ships. The 1972 game Don't Give Up The Ship! called for pencil and paper, six-sided dice, rulers and protractors, and model ships, ideally of 1:1200 scale. The elaborate rules cover morale, sinking, fires, broken masts, and boarding. Dice determined wind speed and direction, and hence the ship's speed, while the use of its cannon was resolved by measuring angles with the protractor. [ 24 ] In naval wargaming of the modern period, General Quarters, primarily (though not exclusively) using six-sided dice, has established itself as one of the leading sets of World War I and II era rules. [ 25 ] Some land-based miniature wargames have also been adapted to naval wargaming. All at Sea, for example, is an adaptation of The Lord of the Rings Strategy Battle Game rules for naval conflicts. The game's mechanics center around boarding parties, with options for ramming actions and siege engines. [ 26 ] As such, the ships' scale ratio corresponds to the 25 mm scale miniatures used by The Lord of the Rings. Model ships are built by hobbyists, just like normal miniature terrain, such as "great ships" of Pelargir, cogs of Dol Amroth, and Corsair galleys. [ 27 ] Air wargaming, like naval wargaming, is a smaller niche within the larger hobby of miniatures wargaming. Aerial combat has developed over a relatively short time compared with naval or land warfare. As such, air wargaming tends to break down into three broad periods. In addition, there are science fiction and "alternative history" games such as Aeronefs and those in the Crimson Skies universe. Wargaming was invented in Prussia near the end of the 18th century. The earliest wargames were based on chess; the pieces represented real military units (artillery, cavalry, etc.), and squares on the board were color-coded to represent different terrain types. Later wargames used realistic maps over which troop pieces could move in a free-form manner, and instead of chess-like sculpted pieces they used little rectangular blocks, because they were played at smaller scales (e.g. 1:8000). The Prussian army formally adopted wargaming as a training tool in 1824. After Prussia defeated France in the Franco-Prussian War of 1870, wargaming spread around the world and was played enthusiastically by both officers and civilians. In 1881, the Scottish writer Robert Louis Stevenson became the first documented person to use toy soldiers in a wargame, and thus he might be considered the inventor of miniature wargaming, although he never published his rules. According to an account by his stepson, they were very sophisticated and realistic, on par with German military wargames. Stevenson played his wargame on the floor, on a map drawn with chalk. [ 28 ] The English writer H. G. Wells developed his own codified rules for playing with toy soldiers, which he published in a book titled Little Wars (1913). This is widely remembered as the first rulebook for miniature wargaming. Little Wars had very simple rules to make it fun and accessible to anyone. The game did not use dice or computation to resolve fights. For artillery attacks, players used spring-loaded toy cannons which fired little wooden cylinders to physically knock over enemy models. As for infantry and cavalry, they could only engage in hand-to-hand combat (even if the figurines carried firearms).
When two infantry units fought in close quarters, the units would suffer non-random losses determined by their relative sizes. Little Wars was designed for a large field of play, such as a lawn or the floor of a large room, because the toy soldiers available to Wells were too large for tabletop play. An infantryman could move up to one foot per turn, and a cavalryman could move up to two feet per turn. To measure these distances, players used a two-foot-long piece of string. Wells was also the first wargamer to use models of buildings, trees, and other terrain features to create a three-dimensional battlefield. [ 29 ] Wells' rulebook was for a long time regarded as the standard system by which other miniature wargames were judged. However, the nascent miniature wargaming community would remain very small for a long time to come. A possible reason was the two World Wars, which de-glamorized war and caused shortages of tin and lead that made model soldiers expensive. [ 30 ] [ 31 ] Another reason may have been the lack of magazines or clubs dedicated to miniature wargames. Miniature wargaming was seen as a niche within the larger hobby of making and collecting model soldiers. In 1955, an American named Jack Scruby began making inexpensive miniature models for miniature wargames out of type metal. Scruby's major contribution to the miniature wargaming hobby was to network players across America and the UK. At the time, the miniature wargaming community was minuscule, and players struggled to find each other. In 1956, Scruby organized the first miniature wargaming convention in America, which was attended by just fourteen people. From 1957 to 1962, he self-published the world's first miniature wargaming magazine, titled The War Game Digest, through which wargamers could publish their rules and share game reports. It had fewer than two hundred subscribers, but it did establish a community that kept growing. [ 32 ] Around the same time in the United Kingdom, Donald Featherstone began writing an influential series of books on wargaming, which represented the first mainstream published contribution to wargaming since Little Wars. Titles included War Games (1962), Advanced Wargames, Solo Wargaming, Wargame Campaigns, Battles with Model Tanks, and Skirmish Wargaming. Such was the popularity of these titles that other authors were able to have wargaming titles published. This output of published wargaming titles from British authors, coupled with the emergence at the same time of several manufacturers providing suitable wargame miniatures (e.g. Miniature Figurines, Hinchliffe, Peter Laing, Garrison, Airfix, Skytrex, Davco, Heroic & Ros), was responsible for the huge upsurge in the hobby's popularity in the late 1960s and into the 1970s. [ 33 ] In 1956, Tony Bath published what was the first ruleset for a miniature wargame set in the medieval period. In 1971, Gary Gygax developed his own miniature wargame system for medieval warfare, called Chainmail. Gygax later produced a supplement for Chainmail that added magic and fantasy creatures, making it the first fantasy miniature wargame. This supplement was inspired by the growing popularity of The Lord of the Rings novels by J. R. R. Tolkien. Gygax later went on to develop the first tabletop role-playing game: Dungeons & Dragons. Dungeons & Dragons was a story-driven game, but it adapted wargaming rules to model the fights players could get into.
Battles in Dungeons & Dragons rarely featured more than a dozen combatants, so the combat rules were designed to model the capabilities of the warriors in very great detail. Strictly speaking, Dungeons & Dragons did not require miniature models to play, but many players found that using miniature models made the fights easier to arbitrate and more immersive. In 1983, a British company called Games Workshop released a fantasy miniature wargame called Warhammer, [ c ] which was the first miniature wargame designed to use proprietary models. Games Workshop at the time made miniature models for use in Dungeons & Dragons, and Warhammer was meant to encourage customers to buy more of these models. Whereas miniature models were optional in Dungeons & Dragons, Warhammer mandated their use, and the battles tended to be larger. [ 34 ] Initially, Warhammer had a threadbare fictional setting and used generic stock characters common to fantasy fiction, but as time went on, Games Workshop expanded the setting with original characters with distinctive visual designs. Games Workshop's official line of models for Warhammer eventually took on such a distinctive look that rival manufacturers could not produce similar-looking models without risking a lawsuit over copyright infringement. Although there was nothing to stop players of Warhammer from using foreign models from third-party manufacturers, doing so could spoil the aesthetic and cause confusion. In 1987, Games Workshop released a science-fiction spinoff of Warhammer called Warhammer 40,000. Like Warhammer, Warhammer 40,000 obliged players to buy proprietary models from Games Workshop, and it became even more successful than Warhammer. The success of the Warhammer games promoted the sales of Games Workshop's line of gaming models. Other game companies sought to emulate Games Workshop's business model; examples include Mantic Games, Fantasy Flight Games, Privateer Press, and Warlord Games, all of which have released their own miniature wargame systems designed to promote sales of their respective lines of proprietary gaming models. This business model has proven lucrative, and thanks to the marketing resources of these companies, sci-fi and fantasy wargames have displaced historical wargames in popularity. Players of miniature wargames tend to be more extroverted than players of board wargames and computer wargames. [ 35 ] [ 36 ] Players of miniature wargames are obliged to meet in person and play in the same room around a table, whereas board wargames can be played via correspondence and computer wargames can be played online; therefore miniature wargaming places a premium on sociability. (This has changed somewhat since the onset of the COVID-19 pandemic, as wargamers of both miniatures and board games have become quite creative in devising ways to play while maintaining social distancing.) [ 37 ] Consequently, conventions and clubs are important to the wargaming community. Some conventions have become very large affairs, such as Gen Con, Origins, and the Historical Miniatures Gaming Society's Historicon, called the "mother of all wargaming conventions". [ 36 ] Players also tend to be middle-aged or older; one reason is that the hobby is expensive and requires the higher disposable income that older people tend to have.
https://en.wikipedia.org/wiki/Miniature_wargaming
A minichromosome is a small chromatin -like structure resembling a chromosome and consisting of centromeres , telomeres and replication origins [ 1 ] but little additional genetic material. [ 2 ] [ self-published source? ] They replicate autonomously in the cell during cellular division . [ 3 ] Minichromosomes may be created by natural processes as chromosomal aberrations or by genetic engineering . [ 1 ] Minichromosomes can be either linear or circular pieces of DNA . [ 3 ] By minimizing the amount of unnecessary genetic information on the chromosome and including the basic components necessary for DNA replication (centromere, telomeres, and replication sequences), molecular biologists aim to construct a chromosomal platform which can be utilized to insert or present new genes into a host cell . [ 3 ] Producing minichromosomes by genetic engineering techniques involves two primary methods, the de novo (bottom-up) and the top-down approach. [ 1 ] The minimum constituent parts of a chromosome (centromere, telomeres, and DNA replication sequences) are assembled [ 4 ] by using molecular cloning techniques to construct the desired chromosomal contents in vitro . Next, the desired contents of the minichromosome must be transformed into a host which is capable of assembling the components (typically yeast or mammalian cells [ 5 ] ) into a functional chromosome. This approach has been attempted for the introduction of minichromosomes into maize for the possibility of genetic engineering, but success has been limited and questionable. [ 6 ] In general, the de novo approach is more difficult than the top-down method due to species incompatibility issues and the heterochromatic nature of centromeric regions. [ 5 ] This method utilizes the mechanism of telomere -mediated chromosomal truncation (TMCT). This process is the generation of truncation by selective transformation of telomeric sequences into a host genome. This insertion causes the generation of more telomeric sequences and eventual truncation. [ 3 ] The newly synthesized truncated chromosome can then be altered through the insertion of new genes for desired traits. The top-down approach is generally considered as the more plausible means of generating extra-numerary chromosomes for the use of genetic engineering of plants. In particular it is useful because their stability during cell division has been demonstrated. [ 7 ] The limitation of this approach is that it is labor-intensive. Unlike traditional methods of genetic engineering, minichromosomes can be used to transfer and express multiple sets of genes onto one engineered chromosome package. [ 8 ] Traditional methods which involve the insertion of novel genes into existing sequences may result in the disruption of endogenous genes [ 1 ] and thus negatively affect the host cell. Additionally, with traditional gene insertion methods, scientists have had less ability to control where the newly inserted genes are located on the host cell chromosomes, [ 9 ] which makes it difficult to predict inheritance of multiple genes from generation to generation. Minichromosome technology allows for the stacking of genes side-by-side on the same chromosome thus reducing likelihood of segregation of novel traits. In 2006, scientists demonstrated the successful use of telomere truncation in maize plants to produce minichromosomes that could be utilized as a platform for inserting genes into the plant genome. 
[ 10 ] In plants, the telomere sequence is conserved, which implies that this strategy can be utilized to successfully construct additional minichromosomes in other plant species. [ 1 ] In 2007, scientists reported success in assembling minichromosomes in vitro using the de novo method. [ 6 ] The use of minichromosomes as a means for generating more desirable crop traits is actively being explored. Major advantages include the ability to introduce genetic information which is highly compatible with the host genome. This eliminates the risk of disrupting various important processes such as cell division and gene expression. With continued development, the future for use of minichromosomes may make a huge impact on the productivity of major crops. [ 11 ] Minichromosomes have also been successfully inserted into yeast and animal cells. These minichromosomes were constructed using the de novo approach. [ 3 ]
https://en.wikipedia.org/wiki/Minichromosome
Minicircles are small (~4 kb ) circular replicons . They occur naturally in some eukaryotic organelle genomes . In the mitochondria -derived kinetoplast of trypanosomes , minicircles encode guide RNAs for RNA editing . [ 1 ] In Amphidinium , the chloroplast genome is made of minicircles that encode chloroplast proteins. [ 2 ] [ 3 ] Minicircles are small (~4kb) circular plasmid derivatives that have been freed from all prokaryotic vector parts. They have been applied as transgene carriers for the genetic modification of mammalian cells, with the advantage that, since they contain no bacterial DNA sequences, they are less likely to be perceived as foreign and destroyed. (Typical transgene delivery methods involve plasmids, which contain foreign DNA.) The smaller size of minicircles also extends their cloning capacity and facilitates their delivery into cells. Their preparation usually follows a two-step procedure: [ 4 ] [ 5 ] The purified minicircle can be transferred into the recipient cell by transfection or lipofection and into a differentiated tissue by, for instance, jet injection . Conventional minicircles lack an origin of replication , so they do not replicate within the target cells and the encoded genes will disappear as the cell divides (which can be either an advantage or disadvantage depending on whether the application demands persistent or transient expression). A novel addition to the field are nonviral self-replicating minicircles, which owe this property to the presence of a S/MAR-Element . Self-replicating minicircles hold great promise for the systematic modification of stem cells and will significantly extend the potential of their plasmidal precursor forms ("parental plasmids"), the more as the principal feasibility of such an approach has amply been demonstrated for their plasmidal precursor forms. [ 6 ] [ 7 ] [ 8 ] [ 9 ]
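For illustration only (this toy calculation is not from the cited sources), the short Python sketch below models why expression from a conventional, non-replicating minicircle is transient in dividing cells: assuming the delivered copies are neither replicated nor actively degraded and segregate evenly, the average copy number per cell roughly halves with each division. The starting copy number is an arbitrary assumption.

```python
# Toy dilution model for a non-replicating episome such as a conventional
# minicircle: without an origin of replication the vector is not copied, so
# the average number of minicircle copies per cell roughly halves at each
# division. The starting copy number and division count are arbitrary examples.

initial_copies_per_cell = 200      # hypothetical copies delivered by transfection
for division in range(0, 11):
    expected = initial_copies_per_cell / (2 ** division)
    print(f"after {division:2d} divisions: ~{expected:7.2f} copies per cell")

# An S/MAR-containing, self-replicating minicircle (mentioned in the text)
# is designed precisely to escape this dilution.
```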
https://en.wikipedia.org/wiki/Minicircle
Miniclip is a Swiss mobile game publisher and former browser game website that was first launched on 30 March 2001. [ 2 ] It was started by Robert Small and Tihan Presbie with a budget of £40,000. [ 3 ] In 2008, Miniclip was valued at over £275 million. [ 4 ] In 2018, the company gained over $400 million in revenue through its mobile gaming hit, 8 Ball Pool . [ 5 ] [ 6 ] As of July 2009, over 400 applications were hosted on its own website. [ 7 ] In February 2015, Tencent acquired majority stakes of Miniclip. [ 8 ] [ 1 ] In December 2016, Miniclip surpassed 1 billion downloads across its published mobile games on iOS -based, Android -based, and Windows Phone -based devices. In March 2022, Miniclip announced that it had reached 4 billion downloads worldwide with 8 Ball Pool alone accounting for 1 billion of them. [ 9 ] [ 10 ] [ 11 ] In April 2021, Miniclip had celebrated its 21st anniversary. In response, the CEO of Miniclip claimed that it would be keeping away from developing browser-based games to prioritize its mobile gaming products, including Agar.io , 8 Ball Pool , Mini Militia , Ludo Party and more. [ 12 ] In April 2022, Miniclip announced that it would begin prioritizing its mobile games. As a result, the browser game portal was shut down in July 2022 and the website lost all but its two most popular games of the time, Agar.io and 8 Ball Pool . [ 13 ] [ 14 ] [ 15 ] In June 2022, Miniclip agreed to acquire SYBO , the co-publisher and co-developer of Subway Surfers , in an undisclosed deal. [ 16 ] [ 17 ] The deal with SYBO went through in July 2022. [ 18 ] In November 2024, Embracer Group announced that they would divest Easybrain to Miniclip for a consideration of $1.2 billion, with the transaction expected to close in early 2025. [ 19 ] Miniclip has developed and published numerous mobile games for iOS , Android , Symbian , and Windows Phone . This includes 8 Ball Pool , Golf Battle , Gravity Guy , Bloons Tower Defense , On The Run , Plague Inc. for Android, Berry Rush , Agar.io , Diep.io , Mini Militia , Ludo Party and Head Ball 2 and Cricket League In September 2012, Microsoft announced on the Windows blog on 31 August 2012 (see also List of Xbox games on Windows ) that Miniclip games would be able to distribute their games on the Xbox division of Windows 8 . Miniclip games that are supported by Xbox for Windows 8 include Gravity Guy , iStunt 2 , and Monster Island . Gravity Guy was released on Windows Store on 29 November 2010. In April 2013, most Miniclip games for Windows 8 and Windows Phone were distributed for free for one year. [ 26 ] On 14 February 2017, Miniclip released their first mobile racer game which was compatible with Xbox One , PC , and PlayStation 4 , titled MX Nitro . [ 27 ] [ 28 ] On 1 September 2005, the United States Computer Emergency Readiness Team issued an advisory concerning Miniclip: The Retro64 / Miniclip CR64 Loader ActiveX control contains a buffer overflow vulnerability. This may allow a remote, unauthenticated attacker to execute an arbitrary code on a vulnerable system. Although the ActiveX control is no longer in use by either retro64.com or miniclip.com, any system that has used certain pages of these websites in the past (prior to September 2005) may be vulnerable. 
[ 29 ] In 2006, several security firms reported that some Miniclip users had installed a "miniclipgameloader.dll" which contained the hostile code identified as "Trojan Downloader 3069.” [ 30 ] In the same year, another download related to Miniclip installed "High Risk" malware called "Trojan-Downloader.CR64Loader.” [ 31 ]
https://en.wikipedia.org/wiki/Miniclip
The Minidish is the tradename for the small satellite dish used by Freesat and Sky . The term has entered the vocabulary in the UK and Ireland as a generic term for a satellite dish, particularly small ones. [ citation needed ] The Minidish is an oval, mesh satellite dish capable of reflecting signals broadcast in the upper X band and Ku band . Two sizes exist: The Minidish uses a non-standard connector for the LNB : prior to the mark 4 dishes introduced in 2009, it consisted of a peg about 4 cm (1.6 in) in width and 7.5 mm (0.30 in) in height, as opposed to the standard 40 mm collar. [ clarification needed ] This enforces the use of Sky-approved equipment, but also ensures that a suitable LNB is used. Due to the shape of the dish, an LNB with an oval feedhorn is required to receive the full signal. [ 1 ]
https://en.wikipedia.org/wiki/Minidish
Miniflex is an X-ray diffraction (XRD) analytical measuring instrument produced by Rigaku . The series was introduced in 1973 and is now in its sixth generation. The Rigaku MiniFlex is historically significant in that it was the first commercial benchtop (tabletop) X-ray diffraction instrument. When introduced in 1973, the original Miniflex was about one-tenth the size of, and dramatically less expensive than, conventional X-ray diffraction equipment of the period. The original instrument, and its successor introduced in 1976, employed a horizontal goniometer with data output provided by an internal strip chart recorder . The third generation device, introduced in 1995, was called the Miniflex+. It provided a dramatic advance in X-ray power, to 450 watts by operating at 30 kV and 15 mA, as well as in technology, with computer control. [ 1 ] [ 2 ] Both the Miniflex+ and the current generation product employ a vertical goniometer that allows the use of a 6-position automatic sample changer. The Miniflex II was introduced in 2006 and offered the advance of a monochromatic X-ray source and a 1D silicon strip detector. The fifth generation (Gen 5) MiniFlex600 system, introduced in 2012, built upon this legacy with 600 W of tube power and new PDXL powder diffraction software. 2017 saw the introduction of the 6th generation MiniFlex, incorporating a 2D hybrid pixel array detector (HPAD) and an 8-position automatic sample changer. With the new generation of MiniFlex comes an update to the SmartLab Studio II software. It now offers important new functionality, including a fundamental parameter (FP) method for more accurate peak calculation, phase identification using the Crystallography Open Database (COD), and a wizard for ab initio crystal structure analysis. The simplicity and relatively low cost of the Miniflex line have led to its widespread use in colleges and universities to illustrate the physics behind X-ray diffraction . [ 3 ] In addition to chemistry and physics departments, many geology departments also employ the Miniflex to teach mineralogy . [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] The MiniFlex is used in a wide range of industries, as well as in colleges and universities, for research. Since 1973, over 12,000 patents and scientific papers have been published using data collected by a MiniFlex. [ 9 ]
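Because the article notes the MiniFlex's role in teaching the physics of X-ray diffraction, a short, instrument-agnostic Python illustration of Bragg's law may be helpful. The wavelength below is the standard Cu Kα value; the d-spacing is only an example, and nothing here describes an actual MiniFlex configuration.

```python
import math

# Standard Bragg's law, n * lambda = 2 * d * sin(theta), as used when teaching
# powder X-ray diffraction. The wavelength is the common Cu K-alpha value; the
# d-spacing is an arbitrary example and is not a MiniFlex specification.

wavelength_angstrom = 1.5406      # Cu K-alpha radiation
d_spacing_angstrom = 3.14         # example interplanar spacing

def two_theta_deg(d, lam, n=1):
    """Return the diffraction angle 2-theta (degrees) for order n, or None
    if the reflection is not geometrically possible."""
    s = n * lam / (2.0 * d)
    if s > 1.0:
        return None
    return 2.0 * math.degrees(math.asin(s))

print(two_theta_deg(d_spacing_angstrom, wavelength_angstrom))   # about 28.4 degrees
```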
https://en.wikipedia.org/wiki/Miniflex
A minigene is a minimal gene fragment that includes an exon and the control regions necessary for the gene to express itself in the same way as a wild type gene fragment. This is a minigene in its most basic sense. More complex minigenes can be constructed containing multiple exons and intron(s) . Minigenes provide a valuable tool for researchers evaluating splicing patterns both in vivo and in vitro biochemically assessed experiments. [ 1 ] [ 2 ] Specifically, minigenes are used as splice reporter vectors (also called exon-trapping vectors) and act as a probe to determine which factors are important in splicing outcomes. They can be constructed to test the way both cis-regulatory elements (RNA effects) and trans-regulatory elements (associated proteins / splicing factors ) affect gene expression. [ 3 ] Minigenes were first described as the somatic assembly of DNA segments and consisted of DNA regions known to encode the protein and the flanking regions required to express the protein. The term was first used in a paper in 1977 to describe the cloning of two minigenes that were designed to express a peptide. [ 4 ] RNA splicing was discovered in the late 1970s through the study of adenoviruses that invade mammals and replicate inside them. Researchers identified RNA molecules that contained sequences from noncontiguous parts of the virus’s genome. This discovery led to the conclusion that regulatory mechanisms existed which affected mature RNA and the genes it expresses. [ 5 ] Using minigenes as a splice reporting vector to explore the effects of RNA splicing regulation naturally followed and remains the major use of minigenes to date. In order to provide a good minigene model, the gene fragment should have all of the necessary elements to ensure it exhibits the same alternative splicing (AS) patterns as the wild type gene, i.e., the length of the fragment must include all upstream and downstream sequences which can affect its splicing. [ 1 ] [ 2 ] Therefore, most minigene designs begin with a thorough in silico analysis of the requirements of the experiment before any "wet" lab work is conducted. [ 6 ] With the advent of Bioinformatics and widespread use of computers, several good programs now exist for the identification of cis-acting control regions that affect the splicing outcomes of a gene [ 7 ] [ 8 ] and advanced programs can even consider splicing outcomes in various tissue types. [ 9 ] Differences in minigenes are usually reflected in the final size of the fragment, which is in turn a reflection of the complexity of the minigene itself. The number of foreign DNA elements (exon and introns) inserted into the constitutive exons and introns of a given fragment varies with the type of experiment and the information being sought. A typical experiment might involve wild type minigenes which are expected to express genes normally in a comparison run against genetically engineered allelic variations which replace the wild-type gene and have been cloned into the same flanking sequences as the original fragment. These types of experiments help to determine the effect of various mutations on pre-mRNA splicing. [ 3 ] Once a suitable genomic fragment is chosen (Step 1), the exons and introns of the fragment can be inserted and amplified, along with the flanking constitutive exons and introns of the original gene, by PCR . The primers for PCR can be chosen so that they leave " sticky ends " at 3' sense and anti-sense strands (Step 2). 
These "sticky-ends" can be easily incorporated into a TOPO Vector by ligation into a commercially available source which has ligase already attached to it at the sight of incorporation [ 10 ] (Step 3). The subsequent TOPO Vectors can be transfected into E.coli cells (Step 4). After incubation, total RNA can be extracted from the bacterial colonies and analyzed using RT-PCR to quantify ratios of exon inclusion/exclusion (step 5). The minigene can be transfected into different cell types with various splicing factors to test trans-acting elements (Step 6). The expressed genes or the proteins they encode can be analyzed to evaluate splicing components and their effects via a variety of methods including hybridization or size-exclusion chromatography . [ 1 ] [ 2 ] RNA splicing errors have been estimated to occur in a third of genetic diseases. [ citation needed ] To understand pathogenesis and identify potential targets of therapeutic intervention in these diseases, explicating the splicing elements involved is essential. [ 11 ] Determining the complete set of components involved in splicing presents many challenges due to the abundance of alternative splicing, which occurs in most human genes, and the specificity in which splicing is carried out in vivo . [ 2 ] Splicing is distinctly conducted from cell type to cell type and across different stages of cellular development. Therefore, it is critical that any in vitro or bioinformatic assumptions about splicing regulation are confirmed in vivo . [ 12 ] Minigenes are used to elucidate cis -regulatory elements, trans -regulatory elements and other regulators of pre-mature RNA splicing in vivo . [ 2 ] Minigenes have been applied to the study of a diverse array of genetic diseases due to the aforementioned abundance of alternatively spliced genes and the specificity and variation observed in splicing regulation. [ 1 ] [ 2 ] [ 12 ] The following are examples of minigene use in various diseases. While it is not an exhaustive list, it does provide a better understanding of how minigenes are utilized. RNA splicing errors can have drastic effects on how proteins function, including the hormones secreted by the endocrine system . These effects on hormones have been identified as the cause of many endocrine disorders including thyroid-related pathological conditions, rickets , hyperinsulinemic hypoglycemia and congenital adrenal hyperplasia . [ 13 ] One specific example of a splicing error causing an endocrine disease that has been studied using minigenes is a type of growth hormone deficiency called isolated growth hormone deficiency (IGHD), a disease that results in growth failure. IGHD type II is an autosomal dominant form caused by a mutation in the intervening sequence (IVS) adjacent to exon 3 of the gene encoding growth hormone 1, the GH-1 gene. This mutated form of IVS3 causes exon 3 to be skipped in the mRNA product. The mRNA (-E3) encodes a truncated form of hGH that then inhibits normal hGH secretion. Minigenes were used to determine that a point mutation within an intron splice enhancer (ISE) embedded in IVS3 was to blame for the skipping of E3. Moreover, it was determined that the function of the ISE is influenced by a nearby transposable AC element , revealing that this particular splicing error is caused by a trans-acting factor. [ 14 ] Accumulation of tau protein is associated with neurodegenerative diseases including Alzheimer's and Parkinson's diseases as well as other tauopathies . 
[ 15 ] Tau protein isoforms are created by alternative splicing of exons 2, 3 and 10. The regulation of tau splicing is specific to stage of development, physiology and location. Errors in tau splicing can occur in both exons and introns and, depending on the error, result in changes to protein structure or loss of function. [ 16 ] Aggregation of these abnormal tau proteins correlates directly with pathogenesis and disease progression. Minigenes have been used by several researchers to help understand the regulatory components responsible for mRNA splicing of the TAU gene. [ 15 ] [ 16 ] [ 17 ] Cancer is a complex, heterogeneous disease that can be hereditary or the result of environmental stimuli. [ 18 ] Minigenes are used to help oncologists understand the roles pre-mRNA splicing plays in different cancer types. Of particular interest are cancer specific genetic mutations that disrupt normal splicing events, including those affecting spliceosome components and RNA-binding proteins such as heterogeneous nuclear ribonucleoparticules (hnRNP) , serine/arginine-rich (SR) proteins and small ribonucleoproteins (snRNP) . [ 19 ] [ 20 ] Proteins encoded by aberrantly spliced pre-mRNAs are functionally different and contribute to the characteristic anomalies exhibited by cancer cells, including their ability to proliferate, invade and undergo angiogenesis, and metastasis. [ 20 ] Minigenes help researchers identify genetic mutations in cancer that result in splicing errors and determine the downstream effects those splicing errors have on gene expression. [ 21 ] Using knowledge obtained from studies employing minigenes, oncologists have proposed tests designed to detect products of abnormal gene expression for diagnostic purposes. [ 22 ] Additionally, the prospect of using minigenes as a cancer immunotherapy is being explored. [ 23 ] [ 24 ]
https://en.wikipedia.org/wiki/Minigene
In probability theory , the minimal-entropy martingale measure (MEMM) is the risk-neutral probability measure that minimises the entropy difference between the objective probability measure, P , and the risk-neutral measure, Q . In incomplete markets , this is one way of choosing a risk-neutral measure (from the infinitely many available) so as to still maintain the no-arbitrage conditions. The MEMM has the advantage that the measure Q will always be equivalent to the measure P by construction. Another common choice of equivalent martingale measure is the minimal martingale measure , which minimises the variance of the equivalent martingale . For certain situations, the resultant measure Q will not be equivalent to P . In a finite probability model, with objective probabilities p_i and risk-neutral probabilities q_i , one must minimise the Kullback–Leibler divergence D_KL(Q ∥ P) = ∑_{i=1}^{N} q_i ln(q_i / p_i), subject to the requirement that the expected return under Q is r , where r is the risk-free rate.
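The constrained minimisation in the finite model can be illustrated numerically. The following Python sketch uses invented values (the returns, objective probabilities and risk-free rate are made up), and a generic constrained solver stands in for the closed-form solution; it finds the measure Q minimising D_KL(Q ∥ P) subject to the pricing constraint.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical one-period model: asset gross returns x_i under the
# objective probabilities p_i (all values are made-up illustrations).
x = np.array([0.90, 1.00, 1.05, 1.20])   # possible gross returns
p = np.array([0.10, 0.40, 0.30, 0.20])   # objective measure P
r = 1.02                                  # risk-free gross return

def kl(q):
    """Kullback-Leibler divergence D_KL(Q || P) for discrete measures."""
    return np.sum(q * np.log(q / p))

constraints = [
    {"type": "eq", "fun": lambda q: np.sum(q) - 1.0},   # Q is a probability measure
    {"type": "eq", "fun": lambda q: np.dot(q, x) - r},  # expected return equals r under Q
]
bounds = [(1e-12, 1.0)] * len(p)                        # keep q_i > 0, so Q stays equivalent to P

res = minimize(kl, p.copy(), bounds=bounds, constraints=constraints)
q = res.x
print("MEMM probabilities:", np.round(q, 4))
print("Check E_Q[x] =", np.dot(q, x))
```

In this one-period setting the minimiser can equivalently be written in exponential-tilting form, q_i ∝ p_i exp(λ x_i) with λ chosen to satisfy the pricing constraint; the generic solver above is simply the more direct illustration.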
https://en.wikipedia.org/wiki/Minimal-entropy_martingale_measure
Minimal algebra is an important concept in tame congruence theory, a theory that has been developed by Ralph McKenzie and David Hobby. [ 1 ] A minimal algebra is a finite algebra with more than one element, in which every non-constant unary polynomial is a permutation on its domain. In simpler terms, it is a finite algebraic structure in which every unary polynomial operation (an operation built from the basic operations using a single variable and constants) is either constant or a permutation ( bijective mapping ). These algebras provide intriguing connections between mathematical concepts and are classified into different types, including affine, Boolean, lattice, and semilattice types. A polynomial of an algebra is a composition of its basic operations, 0-ary operations and the projections. Two algebras are called polynomially equivalent if they have the same universe and precisely the same polynomial operations. A minimal algebra M falls into one of the following types (P. P. Pálfy): [ 1 ] [ 2 ] it is polynomially equivalent to an algebra all of whose basic operations are permutations (a G-set, the unary type), a vector space over a finite field (the affine type), the two-element Boolean algebra, the two-element lattice, or the two-element semilattice.
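As a hedged illustration of the definition (the three-element universe and single binary operation below are invented and not drawn from the cited literature), the brute-force Python sketch enumerates the unary polynomial functions of a toy finite algebra and tests whether every non-constant one is a permutation.

```python
from itertools import product

# A toy finite algebra on {0, 1, 2} with one binary basic operation.
# The universe must be {0, ..., n-1} so functions can be stored as value tuples.
UNIVERSE = list(range(3))

def op(a, b):
    return max(a, b)   # hypothetical basic operation

def unary_polynomials():
    """Enumerate the unary polynomial functions of the algebra: all maps
    x -> t(x, c1, ..., ck) where t is a term in the basic operation and
    the c_i are constants. Functions are represented as value tuples."""
    identity = tuple(UNIVERSE)
    constants = {tuple(c for _ in UNIVERSE) for c in UNIVERSE}
    polys = {identity} | constants
    changed = True
    while changed:   # close under applying op pointwise to known polynomials
        changed = False
        for f, g in product(list(polys), repeat=2):
            h = tuple(op(f[x], g[x]) for x in UNIVERSE)
            if h not in polys:
                polys.add(h)
                changed = True
    return polys

def is_minimal_algebra():
    """Check the defining condition: more than one element, and every
    non-constant unary polynomial is a permutation of the universe."""
    if len(UNIVERSE) < 2:
        return False
    for f in unary_polynomials():
        if len(set(f)) > 1 and sorted(f) != UNIVERSE:
            return False
    return True

print(is_minimal_algebra())   # False here: x -> max(x, 1) is non-constant but not a permutation
```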
https://en.wikipedia.org/wiki/Minimal_algebra
The minimal genome is a concept which can be defined as the set of genes sufficient for life to exist and propagate under nutrient-rich and stress-free conditions. Alternatively, it may be defined as the gene set supporting life on an axenic cell culture in rich media, and it is thought what makes up the minimal genome will depend on the environmental conditions that the organism inhabits. [ 1 ] This minimal genome concept assumes that genomes can be reduced to a bare minimum, given that they contain many non-essential genes of limited or situational importance to the organism. Therefore, if a collection of all the essential genes were put together, a minimum genome could be created artificially in a stable environment. By adding more genes, the creation of an organism of desired characteristics is possible. The concept of minimal genome arose from the observations that many genes do not appear to be necessary for survival. [ 2 ] [ 3 ] In order to create a new organism a scientist must determine the minimal set of genes required for metabolism and replication . This can be achieved by experimental and computational analysis of the biochemical pathways needed to carry out basic metabolism and reproduction. [ 4 ] A good model for a minimal genome is Mycoplasma genitalium due to its very small genome size. Most genes that are used by this organism are usually considered essential for survival; based on this concept a minimal set of 256 genes has been proposed. [ 5 ] Scientifically, minimal genome projects allow the identification of the most essential genes, and the reduction of genetic complexity, making engineered strains more predictable. [ 6 ] Industrially and agriculturally, they could be used to engineer plants to resist herbicides or harsh environments; bacteria to synthetically produce chemicals; or microbes to produce beneficial bio-products. [ 6 ] Environmentally, they could be a source of clean energy or renewable chemicals, or help in carbon sequestration from the atmosphere. [ 6 ] By one early investigation, the minimal genome of a bacterium should include a virtually complete set of proteins for replication and translation, a transcription apparatus including four subunits of RNA polymerase including the sigma factor rudimentary proteins sufficient for recombination and repair, several chaperone proteins, the capacity for anaerobic metabolism through glycolysis and substrate-level phosphorylation , transamination of glutamyl-tRNA to glutaminyl-tRNA, lipid (but no fatty acid) biosynthesis, eight cofactor enzymes, protein export machinery, and a limited metabolite transport network including membrane ATPases. [ 7 ] Proteins involved in the minimum bacterial genome tend to be substantially more related to proteins found in archaea and eukaryotes compared to the average gene in the bacterial genome more generally indicating a substantial number of universally (or near universally) conserved proteins. The minimal genomes reconstructed on the basis of existing genes does not preclude simpler systems in more primitive cells, such as an RNA world genome which does not have the need for DNA replication machinery, which is otherwise part of the minimal genome of current cells. [ 7 ] The genes which most frequently survive gene loss include those involved in DNA replication, transcription, and translation, although a number of exceptions are known. For example, loss can be frequently seen in subunits of the DNA polymerase holoenzyme and some DNA repair genes. 
The majority of ribosomal proteins are retained (though some like RpmC are sometimes missing). In some cases, some tRNA synthetases are lost. Gene loss is also seen in genes for components in the cellular envelope, biosynthesis of biomolecules like purine, energy metabolism, and more. [ 8 ] The minimal genome corresponds to small genome sizes, as bacterial genome size correlates with the number of protein-coding genes, typically one gene per kilobase. [ 1 ] Mycoplasma genitalium , with a 580 kb genome and 482 protein-coding genes, is a key model for minimal genomes. [ 9 ] The smallest known genome of a free-living bacterium is 1.3 Mb with ~1100 genes. [ 10 ] However, significantly more reduced genomes are commonly observed in naturally occurring symbiotic and parasitic organisms. Genome reduction driven by mutation and genetic drift in small and asexual populations with biases for gene deletion can be seen in symbionts and parasites, which commonly experience rapid evolution, codon reassignments, biases for AT nucleotide compositions, and elevated levels of protein misfolding which results in a heavy dependence on molecular chaperones to ensure protein functionality. [ 1 ] These effects, which coincide with the proliferation of mobile genetic elements, pseudogenes, genome rearrangements, and chromosomal deletion are best studied and observed in more recently evolved symbionts. [ 11 ] [ 12 ] [ 13 ] The cause for this is that the symbiont or parasite can outsource a usual cellular function to another cell and so, in the absence of needing to carry out this function for itself, subsequently lose its own genes meant to perform this function. The most extreme examples of genome reduction have been found in maternally transmitted endosymbionts which have experienced lengthy coevolution with their hosts and, in the process, lost a substantial amount of their cellular autonomy. Beneficial symbionts have a greater capacity for genome reduction than do parasites, as host co-adaptation allows them to lose additional crucial genes. [ 14 ] Another important distinction between genome reduction in parasites and genome reduction in endosymbionts is that parasites lose both the gene and its associated function, whereas endosymbionts often retain the function of the lost gene since that function is taken over by the host. [ 15 ] For endosymbionts in some lineages, it is possible for the entire genome to be lost. For example, some mitosomes and hydrogenosomes (degenerate versions of the mitochondria known in some organisms) have experienced a total gene loss and have no remaining genes, whereas the human mitochondria still retains some of its genome. The extant genome in the human mitochondrial organelle is 16.6kb in length and contains 37 genes. [ 16 ] Between organisms, the mitochondrial genome can code for between 3 and 67 proteins, with suggestions that the last eukaryotic common ancestor encoded a minimum of 70 genes in its genome. [ 17 ] The smallest known mitochondrial genome is that of Plasmodium falciparum , with a genome size of 6kb containing three protein-coding genes and a few rRNA genes. (On the other hand, the largest known mitochondrial genome is 490kb. [ 18 ] ) Genomes nearly as small can be found in related apicomplexans as well. [ 19 ] On the other hands, the mitochondrial genomes of land plants have expanded to over 200kb with the largest one (at over 11Mb) exceeding the size of the genome of bacteria and even the simplest eukaryotes. 
[ 20 ] Organelles known as plastids in plants (including chloroplasts , chromoplasts , and leucoplasts ), once free-living cyanobacteria , typically retain longer genomes on the order of 100-200kb with 80-250 genes. [ 21 ] In one analysis of 15 chloroplast genomes, the analyzed chloroplasts had between 60 and 200 genes. Across these chloroplasts, a total of 274 distinct protein-coding genes were identified, and only 44 of them were universally found in all sequenced chloroplast genomes. [ 22 ] Examples of organisms which have experienced genome reduction include species of Buchnera , Chlamydia , Treponema , Mycoplasma , and many others. Comparisons of multiple sequenced genomes of endosymbionts in multiple isolates of the same species and lineages have confirmed that even long-time symbionts are still experiencing ongoing gene loss and transfer to the nucleus. [ 15 ] [ 8 ] Nuclear integrants of mitochondrial or plastid DNA have sometimes been termed "numts" and "nupts" respectively. [ 15 ] A number of symbionts have now been discovered with genomes under 500 kb in length, the majority of them being bacterial symbionts of insects typically from the taxa Pseudomonadota and Bacteroidota . [ 8 ] The parasitic archaea Nanoarchaeum equitans has a genome 491 kb in length. [ 23 ] In 2002, it was found that some species of the genus Buchnera have a reduced genome of only 450 kb in size. [ 24 ] In 2021, the endosymbiont " Candidatus Azoamicus ciliaticola" was found to have a genome 290 kb in length. [ 25 ] The symbiont Zinderia insecticola was found to have a genome of 208 kb in 2010. [ 26 ] In 2006, another endosymbiont Carsonella ruddii was found with a reduced genome 160 kb in length encompassing 182 protein-coding genes. [ 27 ] Surprisingly, it was found that gene loss in Carsonella symbionts is an ongoing process. [ 28 ] Other intermediate stages in gene loss have been observed in other reduced genomes, including the transition of some genes into pseudogenes as a result of accumulating mutations that are not selected against since the host carries out the needed purpose of that gene. [ 8 ] The genome of Candidatus Hodgkinia cicadicola, a symbiont of cicadas, was found to be 144 kb. [ 29 ] In 2011, Tremblaya princeps was found to contain an intracellular endosymbiont with a genome of 139 kb, reduced to the point that even some translation genes had been lost. [ 30 ] In the smallest to date, a 2013 study found some bacterial symbionts of insects with even smaller genomes. Specifically, two leafhopper symbionts contained highly reduced genomes: while Sulcia muelleri had a genome of 190 kb, Nasuia deltocephalinicola had a genome of only 112 kb and contains 137 protein-coding genes. Combined, the genomes of these two symbionts can only synthesize ten amino acids, in addition to some of the machinery involved in DNA replication, transcription, and translation. The genes for ATP synthesis through oxidative phosphorylation have been lost, however. [ 31 ] Viruses and virus-like particles have the smallest genomes in nature. For instance, bacteriophage MS2 consists of only 3569 nucleotides (single-stranded RNA) and encodes just four proteins which overlap to make efficient use of the genome space. [ 32 ] Similarly, among eukaryotic viruses, porcine circoviruses are among the smallest. [ 33 ] They encode only 2–3 open reading frames . Viroids are circular molecules RNA which do not have any protein-coding genes at all, although the RNA molecule itself acts as a ribozyme to help enable its replication. 
The genome of a viroid is between 200 and 400 nucleotides in length. [ 34 ] This concept arose as a result of a collaborative effort between National Aeronautics and Space Administration (NASA) and two scientists: Harold Morowitz and Mark Tourtellotte. In the 1960s, NASA was searching for extraterrestrial life forms, assuming that if they existed they may be simple creatures. To attract people's attention, Morowitz published about mycoplasmas as being the smallest and simplest self-replicating creatures. NASA and the two scientists grouped together and came up with the idea to assemble a living cell from the components of mycoplasmas. Mycoplasmas were selected as the best candidate for cell reassembly, since they are composed of a minimum set of organelles, such as a plasma membrane, ribosomes and a circular double stranded DNA. Morowitz' major idea was to define the entire machinery of mycoplasmas cell in molecular level. He announced that an international effort would help him accomplish this main objective. By the 1980s, Richard Herrmann's laboratory had fully sequenced and genetically characterized the 800kb genome of M. pneumoniae . Despite the small size of the genome, the process took three years. In 1995, another laboratory from Maryland the Institute for Genomic Research (TIGR) collaborated with the teams of Johns Hopkins and the University of North Carolina. This group chose to sequence the genome of Mycoplasma genitalium , consisting of only 580 kb genome. This was completed in 6 months. Sequencing M. genitalium revealed conserved genes crucial for defining essential life functions in a minimal self-replicating cell, making it a key candidate for the minimal genome project. Finding a minimal set of essential genes is usually done by selective inactivation or deletions of genes and then testing the effect of each under a given set of conditions. The J. Craig Venter institute conducted these types of experiment on M. genitalium and found 382 essential genes. The J.Craig Venter institute later started a project to create a synthetic organism named Mycoplasma laboratorium, using the minimal set genes identified from M. genitalium . [ 9 ] Reconstruction of a minimal genome is possible by using the knowledge of existing genomes via which the sets of genes, essential for living can also be determined. Once the set of essential genetic elements are known, one can proceed to define the key pathways and core-players by modeling simulations and wet lab genome engineering. [ 3 ] As of 1999, the two organisms upon which the ‘minimal gene set for cellular life' have been applied are: Haemophilus influenzae , and M. genitalium . A list of orthologous proteins were compiled in hope that it would contain protein necessary for cell survival, as orthologous analysis determines how two organisms evolved and shed away any non-essential genes. Since H. influenza and M. genitalium are Gram negative and Gram positive bacteria and due to their vast evolution it was expected that these organisms would be enriched genes that were of universal importance. However, 244 detected orthologs discovered contained no parasitism-specific proteins. The conclusion of this analysis was that similar biochemical functions might be performed by non-orthologous proteins. Even when biochemical pathways of these two organisms were mapped, several pathways were present but many were incomplete. Proteins determined to be common between the two organisms were non-orthologous to each other. 
[ 3 ] Much of the research focuses mainly on the ancestral genome and less on the minimal genome. Studies of these existing genomes helped determine that orthologous genes found in these two species are not necessarily essential for survival; in fact, non-orthologous genes were found to be more important. It was also determined that, in order for proteins to share the same function, they do not need to have the same sequence or common three-dimensional folds. Distinguishing between orthologs and paralogs and detecting displacements of orthologs has been quite beneficial in reconstructing evolution and determining the minimal gene set required for cellular life. Instead of conducting a strict orthology study, comparing groups of orthologs and their occurrence in most clades, rather than in every species, helped identify genes that had been lost or displaced. Only completely sequenced genomes make it possible to study orthologs across a group of organisms; without a fully sequenced genome it would not be possible to determine the essential minimal gene set required for survival. [ 3 ] The J. Craig Venter Institute (JCVI) conducted a study to find all the essential genes of M. genitalium through global transposon mutagenesis. As a result, they found that 382 out of 482 protein-coding genes were essential. Genes encoding proteins of unknown function constitute 28% of the essential protein-coding gene set. Before conducting this study, the JCVI had performed another study on the non-essential genes of M. genitalium (genes not required for growth), in which they reported the use of transposon mutagenesis . Despite identifying the non-essential genes, it is not confirmed whether the products these genes make have any important biological functions. It was only through gene essentiality studies of bacteria that the JCVI was able to compose a hypothetical minimal gene set. In JCVI's 1999 study of the two organisms M. genitalium and Mycoplasma pneumoniae , they mapped around 2,200 transposon insertion sites and identified 130 putative non-essential genes among M. genitalium protein-coding genes or M. pneumoniae orthologs of M. genitalium genes. In their experiment they grew a set of Tn4001 -transformed cells for many weeks and isolated the genomic DNA from these mixtures of mutants. Amplicons were sequenced to detect the transposon insertion sites in the mycoplasma genomes. Genes that contained transposon insertions were classified as hypothetical proteins or proteins considered non-essential. However, some of the disrupted genes initially considered non-essential turned out, after further analysis, to be essential. The reasons for this error could have been that some genes tolerated the transposon insertions without being functionally disrupted; that cells contained two copies of the same gene; or that the gene product was supplied by more than one cell in those mixed pools of mutants. Insertion of a transposon in a gene was taken to mean the gene was disrupted, hence non-essential, but because the absence of the gene products was not confirmed, all disrupted genes were mistaken for non-essential genes. The same 1999 study was later expanded, and the updated results were published in 2005. Some of the disrupted genes thought to be essential were the isoleucyl- and tyrosyl-tRNA synthetases (MG345 and MG455), the DNA replication gene dnaA (MG469), and DNA polymerase III subunit α (MG261). The study was improved by isolating and characterizing M. genitalium Tn4001 insertions in each colony, one by one.
The individual analyses of each colony showed more results and estimates of essential genes necessary for life. The key improvement they made in this study was isolating and characterizing individual transposon mutants. Previously, they isolated many colonies containing a mixture of mutants. The filter cloning approach helped in separating the mixtures of mutants. Now, they claim completely different sets of non-essential genes. The 130 non-essential genes claimed at first have now reduced to 67. Of the remaining 63 genes 26 genes were only disrupted in M. pneumoniae which means that some M. genitalium orthologs of non-essential M. pneumoniae genes were actually essential. They have now fully identified almost all of the non-essential genes in M. genitalium , the number of gene disruptions based on colonies analyzed reached a plateau as function and they claim a total of 100 non-essential genes out of the 482 protein coding genes in M. genitalium . The ultimate result of this project has now come down to constructing a synthetic organism, Mycoplasma laboratorium based on the 387 protein coding region and 43 structural RNA genes found in M. genitalium . [ 35 ] This project is currently still going on. [ needs update ] Researchers at the JCVI in 2010 successfully created a synthetic bacterial cell that was capable of replicating itself. The team synthesized a 1.08 million base pair chromosome of a modified Mycoplasma mycoides . The synthetic cell is called: Mycoplasma mycoides JCVI-syn1.0. The DNA was designed in a computer, synthesized, and transplanted into a cell from which the original genome had been removed. The original molecules and on-going reaction networks of the recipient cell then used the artificial DNA to generate daughter cells. These daughter cells are of synthetic origin and capable of further replication, solely controlled by the synthetic genome. [ 36 ] The first half of the project took 15 years to complete. The team designed an accurate, digitized genome of M. mycoides . A total of 1,078 cassettes were built, each 1,080 base pairs long. These cassettes were designed in a way that the end of each DNA cassette overlapped by 80 base pairs. The whole assembled genome was transplanted in yeast cells and grown as yeast artificial chromosome. [ 36 ] Based on JCVI's progress in the field of synthetic biology, it is possible that in near future scientists will be able to propagate M. genitalium's genome in the form of naked DNA, into recipient mycoplasmas cells and replace their original genome with a synthetic genome. Since, mycoplasmas have no cell wall, the transfer of a naked DNA into their cell is possible. The only requirement now is the technique to include the synthetic genome of M. genitalium into mycoplasma cells. To some extent this has become possible, the first replicating synthetic cell has already been developed by the JCVI and they are now on to creating their first synthetic life, consisting of minimal number of essential genes. This new breakthrough in synthetic biology will certainly bring in a new approach to understand biology; and this redesigning and prototyping genomes will later become beneficial to biotechnology companies, enabling them to produce synthetic microbes that produce new, cheaper and better bio-products. [ 9 ] A number of projects have attempted to identify the essential genes of a species. This number should approximate the "minimal genome". For instance, the genome of E. 
coli has been reduced by about 30%, demonstrating that this species can live with much fewer genes than the wild-type genome contains. [ 37 ] The following table contains a list of such minimal genome projects (including the various techniques used). [ 38 ] The number of essential genes is different for each organism. In fact, each organism has a different number of essential genes, depending on which strain (or individual) is tested. In addition, the number depends on the conditions under which an organism is tested. In several bacteria (or other microbes such as yeast) all or most genes have been deleted individually to determine which genes are "essential" for survival. Such tests are usually carried out on rich media which contain all nutrients. However, if all nutrients are provided, genes required for the synthesis of nutrients are not "essential". When cells are grown on minimal media, many more genes are essential as they may be needed to synthesize such nutrients (e.g. vitamins). The numbers provided in the following table typically have been collected using rich media (but consult original references for details). The number of essential genes were collected from the Database of Essential Genes (DEG), [ 50 ] except for B. subtilis , where the data comes from Genome News Network [ 51 ] [ 52 ] The organisms listed in this table have been systematically tested for essential genes. For more information about minimal genome Please refer also to section 'Other Genera' at 'Mycoplasma laboratorium' .
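The classification step of the global transposon-mutagenesis screens described above can be sketched in a few lines of Python. Everything in the example (gene names, coordinates, insertion positions) is invented for illustration, and, as the text explains, real screens require follow-up checks (gene duplication, cross-feeding in mixed mutant pools, insertions that do not actually disrupt function) before a gene is called non-essential.

```python
# Toy version of the classification step in a global transposon-mutagenesis
# screen: genes that tolerate transposon insertions are flagged as putatively
# non-essential, the remainder as candidate essential genes. All names,
# coordinates and insertion sites below are invented placeholders.

genes = {                      # gene -> (start, end) coordinates on the genome
    "geneA": (100, 1_000),
    "geneB": (1_200, 2_000),
    "geneC": (2_300, 3_400),
}
insertion_sites = [450, 470, 2_900, 3_100]   # mapped transposon insertion positions

def classify(genes, insertions):
    hit = {name for name, (start, end) in genes.items()
           if any(start <= pos <= end for pos in insertions)}
    putatively_non_essential = sorted(hit)
    candidate_essential = sorted(set(genes) - hit)
    return putatively_non_essential, candidate_essential

non_essential, essential = classify(genes, insertion_sites)
print("putatively non-essential:", non_essential)   # ['geneA', 'geneC']
print("candidate essential:", essential)            # ['geneB']
```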
https://en.wikipedia.org/wiki/Minimal_genome
In the branch of abstract algebra known as ring theory , a minimal right ideal of a ring R is a non-zero right ideal which contains no other non-zero right ideal. Likewise, a minimal left ideal is a non-zero left ideal of R containing no other non-zero left ideals of R , and a minimal ideal of R is a non-zero ideal containing no other non-zero two-sided ideal of R ( Isaacs 2009 , p. 190). In other words, minimal right ideals are minimal elements of the partially ordered set (poset) of non-zero right ideals of R ordered by inclusion . The reader is cautioned that outside of this context, some posets of ideals may admit the zero ideal, and so the zero ideal could potentially be a minimal element in that poset. This is the case for the poset of prime ideals of a ring, which may include the zero ideal as a minimal prime ideal . The definition of a minimal right ideal N of a ring R is equivalent to the following conditions: Minimal ideals are the dual notion to maximal ideals . Many standard facts on minimal ideals can be found in standard texts such as ( Anderson & Fuller 1992 ), ( Isaacs 2009 ), ( Lam 2001 ), and ( Lam 1999 ). A non-zero submodule N of a right module M is called a minimal submodule if it contains no other non-zero submodules of M . Equivalently, N is a non-zero submodule of M which is a simple module . This can also be extended to bimodules by calling a non-zero sub-bimodule N a minimal sub-bimodule of M if N contains no other non-zero sub-bimodules. If the module M is taken to be the right R -module R_R (R regarded as a right module over itself), then the minimal submodules are exactly the minimal right ideals of R . Likewise, the minimal left ideals of R are precisely the minimal submodules of the left module _R R . In the case of two-sided ideals, we see that the minimal ideals of R are exactly the minimal sub-bimodules of the bimodule _R R_R . Just as with rings, there is no guarantee that minimal submodules exist in a module. Minimal submodules can be used to define the socle of a module .
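As a concrete finite example (not taken from the cited texts), the brute-force Python sketch below lists the minimal non-zero ideals of the commutative ring Z/6Z, namely {0, 3} and {0, 2, 4}. Because the ring is commutative, left, right and two-sided ideals coincide here; for a noncommutative ring one would instead test closure under multiplication on the appropriate side.

```python
from itertools import combinations

# Brute-force enumeration of the ideals of Z/6Z and selection of the
# minimal non-zero ones, directly following the definition above.

N = 6
elements = range(N)

def is_ideal(subset):
    s = set(subset)
    if 0 not in s:
        return False
    closed_add = all((a + b) % N in s for a in s for b in s)
    closed_neg = all((-a) % N in s for a in s)
    closed_mul = all((a * r) % N in s for a in s for r in elements)
    return closed_add and closed_neg and closed_mul

ideals = [frozenset(c) for k in range(1, N + 1)
          for c in combinations(elements, k) if is_ideal(c)]

nonzero = [i for i in ideals if i != frozenset({0})]
minimal = [i for i in nonzero
           if not any(j < i for j in nonzero)]   # no smaller non-zero ideal strictly inside

print("minimal ideals of Z/6Z:", [sorted(i) for i in minimal])
# Expected output: [[0, 3], [0, 2, 4]] (in some order)
```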
https://en.wikipedia.org/wiki/Minimal_ideal
The concept of a minimal infective dose ( MID ), also known as the infectious dose , has traditionally been used for infectious microorganisms that contaminate foods. The MID was defined as the number of microorganisms ingested (the dose) from which a pathology is observed in the consumer. For example, to cause gastrointestinal disorders, the food must contain more than 100,000 Salmonella per gram, or 1,000 per gram for salmonellosis. [ 1 ] However, some viruses, such as DHBV (duck hepatitis B virus), need as little as 9.5 × 10^9 virions per millilitre to cause liver infection. [ 2 ] To know the dose ingested, it is also necessary to know the mass of the portion; the dose is the concentration of the microorganism in the food multiplied by the mass of the portion consumed. This formulation has served as a basis for reasoning to establish the maximum concentrations permitted by the microbiological regulatory criteria intended to protect the health of consumers. The concept of a dose-response relationship dates back as far as 1493, but its modern usage belongs to the 20th century, [ 3 ] [ 4 ] as quantitative risk assessment matured as a discipline within the field of food safety. An infectious bacterium in a food can cause various effects, such as diarrhea , vomiting , sepsis , meningitis , Guillain-Barré syndrome , and death. In most cases, as the dose increases, the severity of the pathological effects increases, and a "dose-effect relationship" can often be established. For example, the higher the dose of Salmonella , the more diarrhea occurs soon after ingestion, until it reaches its maximum. However, among people who have ingested the same dose, not all are affected. The proportion of people affected is called the response. The dose-response relationship for a given effect (e.g., diarrhea) is therefore the relationship between the dose and the likelihood of experiencing this effect. When the response is less than about 10%, a strictly proportional relationship between dose and response is observed: response ≈ r × dose, where r is the proportionality factor. The dose-effect relationship and the dose-response relationship should not be confused. The existence of this relation has a first important consequence: the proportionality factor, symbolized by the letter r, corresponds precisely to the probability of the effect considered when the dose is equal to one bacterial cell. As a result, the minimum infective dose is exactly equal to one bacterial cell, deviating from the traditional notion of the MID. Proportionality has a second consequence: when the dose is divided by ten, the probability of observing the effect is also divided by ten. Additionally, it is a relationship without a threshold. In industrial practice, everything is done to reduce the probability that a serving contains the bacterium. There is therefore food on the market in which, for example, only one serving in a hundred is contaminated. The probability of the effect considered is then r / 100. If one serving in ten thousand is contaminated, the probability falls to r / 10,000, and so on. The line representing the relation can be extended towards zero: there is no threshold. If the probability of not being infected when exposed to one bacterium is 1 − r, then the probability of not being infected by n bacteria would be (1 − r)^n ≈ exp(−nr), so the probability of being infected is 1 − exp(−nr). For readers familiar with the notion of D50 (the dose that causes the effect in 50% of consumers exposed to the hazard), in most cases the relationship D50 ≈ ln(2)/r thus applies. To compare the dose-response relationships for different effects caused by the same bacterium, or for the same effect caused by different bacteria, one can directly compare the values of r; this can also be used to evaluate the efficacy of drugs such as antibiotics. [ 5 ] However, it may be easier to compare the doses causing the effect in 50% or 1% of consumers; the latter are values of D1 (the dose causing the effect considered in 1% of consumers exposed to the hazard). [ citation needed ] These examples highlight two important things: [ according to whom? ] While consuming a low dose of a pathogenic bacterium is associated with a low probability of disease, infection is still possible. This contributes to sporadic cases of food-borne illness in the population. There is no bacterial concentration in food below which the absence of an outbreak is guaranteed. Some food-borne bacteria, such as ETEC , can cause disease by producing toxins rather than through infection itself. Some synthesize a toxin only when their concentration in the food before ingestion exceeds a threshold, such as Staphylococcus aureus and Bacillus cereus . The concept of MID does not apply to them, but there is a concentration below which they do not constitute a danger to the health of the consumer.
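The exponential dose-response model stated above is easy to evaluate numerically. In the Python sketch below, the value of r is an arbitrary illustration rather than a measured figure for any pathogen; the D50 value follows from solving 0.5 = 1 − exp(−D50 · r) in the stated model.

```python
import math

# Exponential ("single-hit") dose-response model from the text:
#   P(effect | dose n) = 1 - exp(-n * r)
# The value of r is an arbitrary illustration, not a measured figure.

r = 1e-3   # probability that a single ingested cell causes the effect

def p_effect(dose, r):
    return 1.0 - math.exp(-dose * r)

for dose in (1, 10, 100, 1_000, 10_000):
    print(f"dose {dose:>6d} cells -> P = {p_effect(dose, r):.4f}")

# Dose giving a 50% response (D50) under this model: solve 0.5 = 1 - exp(-D50 * r)
d50 = math.log(2.0) / r
print(f"D50 = ln(2)/r = {d50:.0f} cells")
```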
https://en.wikipedia.org/wiki/Minimal_infective_dose
Minimal logic , or minimal calculus , is a symbolic logic system originally developed by Ingebrigt Johansson . [ 1 ] It is an intuitionistic and paraconsistent logic that rejects both the law of the excluded middle and the principle of explosion ( ex falso quodlibet ), and therefore holds neither of the following two derivations as valid: where A and B are any propositions. Most constructive logics only reject the former, the law of excluded middle. In classical logic, the ex falso law, or equivalently ¬A → (A → B), is also valid. These do not automatically hold in minimal logic. Note that the name minimal logic has sometimes also been used to denote logic systems with a restricted number of connectives. Minimal logic is axiomatized over the positive fragment of intuitionistic logic. Both of these logics may be formulated in the language using the same axioms for implication → , conjunction ∧ and disjunction ∨ as the basic connectives , but minimal logic conventionally adds falsum or absurdity ⊥ as part of the language. Other formulations are possible, all of course avoiding explosion. Direct axioms for negation are given below. A desideratum is always the negation introduction law, discussed next. A quick analysis of the valid rules for negation gives a good preview of what this logic, lacking full explosion, can and cannot prove. A natural statement in a language with negation , such as minimal logic, is, for example, the principle of negation introduction , whereby the negation of a statement is proven by assuming the statement and deriving a contradiction. Over minimal logic, the principle is equivalent to a corresponding implication formula, for any two propositions. For B taken as the contradiction A ∧ ¬A itself, this establishes the law of non-contradiction ¬(A ∧ ¬A). Assuming any C , the introduction rule of the material conditional gives B → C, also when B and C are not relevantly related. With this and implication elimination, the above introduction principle implies (A ∧ ¬A) → ¬B, i.e. assuming any contradiction, every proposition can be negated. With negation introduction possible in this logic, any contradiction proves any double negation. In turn, if ¬C is provable for some C , then moreover always C → ¬¬B. Explosion would permit removing the double negation in the consequent, but this principle is not adopted in minimal logic. With this, many statements of the form C → B are also seen to be equivalent to explosion over minimal logic. To name but one example, ¬(A ∨ ¬A) → B. One possible scheme of extending the positive calculus to minimal logic is to treat ¬B as an implication, in which case the theorems from the constructive implication calculus of a logic carry over to negation statements. To this end, ⊥ is introduced as a proposition, not provable unless the system is inconsistent, and negation ¬B is then treated as an abbreviation for B → ⊥. Constructively, ⊥ represents a proposition for which there can be no reason to believe it.
Understood as a schema in A {\displaystyle A} , any implication of the form ( A → C ) → ( A → B ) {\displaystyle (A\to C)\to (A\to B)} is equivalent to just C → B {\displaystyle C\to B} . If absurdity is primitive in the logic, the full explosion principle (e.g. in the form with C = ⊥ {\displaystyle C=\bot } in the above) can thus similarly also be stated as ⊥ → B {\displaystyle \bot \to B} . What follows are quick arguments showing which theorems still do hold in minimal logic, often making implicit use of the valid currying rule and the deduction theorem . By implication introduction, C → ( B → C ) {\displaystyle C\to (B\to C)} , and so ⊥ → ( B → ⊥ ) {\displaystyle \bot \to (B\to \bot )} by considering C = ⊥ {\displaystyle C=\bot } , i.e. Likewise, double-negation introduction B → ¬ ¬ B {\displaystyle B\to \neg \neg B} may directly be derived from modus ponens in the propositional form B → ( ( B → C ) → C ) {\displaystyle B\to ((B\to C)\to C)} . Combining this with the valid contraposition principle (see below), also the stability of negated statements follows, A second equivalent to ¬ B {\displaystyle \neg B} follows from Frege's theorem , This in turn implies a valid weak form of consequentia mirabilis , ( ¬ B → ¬ ¬ B ) ↔ ¬ ¬ B {\displaystyle (\neg B\to \neg \neg B)\leftrightarrow \neg \neg B} . In words, this states that a statement cannot be rejected exactly when the negation of the statement implies that it cannot be rejected. The double-negation introduction entails, and in turn is entailed by, the mere special case B = ¬ ¬ A {\displaystyle B=\neg \neg A} in The rest of this section re-derives the first three theorems above, as special cases of a few other, stronger valid theorems, each involving two propositional variables. Firstly, as for the adopted principles in the implication calculus, which do not involve negations, the page on Hilbert system presents them through propositional forms of the axioms of law of identity , implication introduction and a variant of modus ponens . Note the equivalence ( B → ( A → C ) ) ↔ ( A → ( B → C ) ) {\displaystyle {\big (}B\to (A\to C){\big )}\leftrightarrow {\big (}A\to (B\to C){\big )}} proven there. For a first derivation, setting C = ⊥ {\displaystyle C=\bot } here at once results in the schema In the intuitionistic Hilbert system, when not introducing ⊥ {\displaystyle \bot } as a constant, this can also be taken as the second negation-characterizing axiom. (The other being explosion.) Now with A {\displaystyle A} taken as ⊥ {\displaystyle \bot } resp. ¬ B {\displaystyle \neg B} , the above indeed shows explosion in the form ⊥ → ¬ B {\displaystyle \bot \to \neg B} resp. B → ¬ ¬ B {\displaystyle B\to \neg \neg B} . Secondly, the double-negation introduction likewise also follows from the mere special case B = ¬ A {\displaystyle B=\neg A} in which is close to both of the above theorems. The general ( ( [ A → C ] → C ) → ( B → C ) ) ↔ ( A → ( B → C ) ) {\displaystyle {\big (}([A\to C]\to C)\to (B\to C){\big )}\leftrightarrow {\big (}A\to (B\to C){\big )}} indeed holds. The special case ( [ ( B → C ) → C ] → C ) ↔ ( B → C ) {\displaystyle {\big (}[(B\to C)\to C]\to C{\big )}\leftrightarrow (B\to C)} is itself a generalization of the stability of negated statements. The latter alternatively also follows from ( [ ( B → C ) → C ] → D ) → ( B → D ) {\displaystyle {\big (}[(B\to C)\to C]\to D{\big )}\to (B\to D)} . Under the Curry-Howard correspondence , the last theorem here may also be justified by the lambda expression λ f ( ( B → C ) → C ) → D . λ b B . f ( λ g B → C .
g ( b ) ) {\displaystyle \lambda f^{((B\to C)\to C)\to D}.\ \lambda b^{B}.\ f(\lambda g^{B\to C}.g(b))} , just to state this method for one of the theorems here. Thirdly, from A → ( B ↔ ( A → B ) ) {\displaystyle A\to {\big (}B\leftrightarrow (A\to B){\big )}} and contrapositions, one has This entails ( ¬ ¬ ( A → B ) ) → ( A → ¬ ¬ B ) {\displaystyle {\big (}\neg \neg (A\to B){\big )}\to {\big (}A\to \neg \neg B{\big )}} , from which the double-negation implication also follows with B = A {\displaystyle B=A} . Lastly, in minimal logic the contraposition may be proven from ( B → A ) → ( ( A → C ) → ( B → C ) ) {\displaystyle (B\to A)\to ((A\to C)\to (B\to C))} . From this one finds that, for any B {\displaystyle B} , one has A → ( ¬ A → ¬ B ) {\displaystyle A\to (\neg A\to \neg B)} . So relatedly, this also proves, like negation introduction, that ( A ∧ ¬ A ) → ¬ B {\displaystyle (A\land \neg A)\to \neg B} . From these, again, also ⊥ → ¬ B {\displaystyle \bot \to \neg B} . The double-negation implication also follows from B = ¬ A {\displaystyle B=\neg A} in A → ( ¬ A → ¬ B ) {\displaystyle A\to (\neg A\to \neg B)} using the weak consequentia mirabilis. Going beyond statements solely in terms of implications, the principles discussed previously can now also be established as theorems: With the definition of negation through ⊥ {\displaystyle \bot } , the modus ponens statement in the form ( A ∧ ( A → C ) ) → C {\displaystyle (A\land (A\to C))\to C} itself specializes to the non-contradiction principle , when considering C = ⊥ {\displaystyle C=\bot } . When negation is an implication, then the curried form of non-contradiction is again A → ¬ ¬ A {\displaystyle A\to \neg \neg A} . Further, negation introduction in the form with a conjunction, spelled out in the previous section, is implied as the mere special case of ( B → ( A ∧ ( A → C ) ) ) → ( B → C ) {\displaystyle {\big (}B\to (A\land (A\to C)){\big )}\to (B\to C)} . In this way, minimal logic can be characterized as a constructive logic just without negation elimination (a.k.a. explosion). With this, most of the common intuitionistic implications involving conjunctions of two propositions can also be obtained, including the currying equivalence. The important equivalence is worth emphasizing. It expresses that these are two equivalent ways of saying that both A {\displaystyle A} and B {\displaystyle B} imply C {\displaystyle C} . From it, two of the familiar De Morgan's laws are obtained, The third valid De Morgan's law may also be derived. The negation of an excluded middle statement implies its own validity. With reference to the weak variant of consequentia mirabilis above, it thus follows that This result may also be seen as a special case of ( ( B ∨ ( B → C ) ) → C ) → C {\displaystyle {\big (}(B\lor (B\to C))\to C{\big )}\to C} , which follows from ( ( A ∨ B ) → C ) → ( B → C ) {\displaystyle ((A\lor B)\to C)\to (B\to C)} when considering B → C {\displaystyle B\to C} for A {\displaystyle A} . Another, somewhat more specific special case, ( ( A ∨ ⊥ ) → A ) ↔ ( ⊥ → A ) {\displaystyle {\big (}(A\lor \bot )\to A{\big )}\leftrightarrow (\bot \to A)} , already suggests how naive disjunction laws are tied to explosion, a topic discussed in detail further below. Relatedly, for any A {\displaystyle A} , case analysis shows that ( B ∨ A ) ∧ ( A → B ) {\displaystyle (B\lor A)\land (A\to B)} is equivalent to simply B {\displaystyle B} .
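As a sketch under the Curry-Howard reading (Lean 4 syntax; again, no proof below appeals to explosion), the lambda expression above and the contraposition principle used here can be written as proof terms:

```lean
variable (A B C D : Prop)

-- The lambda expression from the text:
-- (((B → C) → C) → D) → (B → D).
example : (((B → C) → C) → D) → (B → D) :=
  fun f b => f (fun g => g b)

-- The composition principle (B → A) → ((A → C) → (B → C)).
example : (B → A) → (A → C) → (B → C) :=
  fun h g b => g (h b)

-- Its case C := ⊥ is contraposition proper: (B → A) → (¬A → ¬B).
example : (B → A) → (¬A → ¬B) :=
  fun h na b => na (h b)
```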
In particular, ( B ∨ ¬ B ) ∧ ( ¬ B → B ) {\displaystyle (B\lor \neg B)\land (\neg B\to B)} is equivalent to B {\displaystyle B} . Now in intuitionistic logic, even ( B ∨ ¬ B ) ∧ ( ¬ B → ⊥ ) {\displaystyle (B\lor \neg B)\land (\neg B\to \bot )} is equivalent to B {\displaystyle B} , but the proof of the forward direction of this depends on a case of explosion. What can be salvaged in minimal logic is This implication is to be compared with the full, only intuitionistically provable propositional expression of the disjunctive syllogism . All the above principles can be obtained using theorems from the positive calculus in combination with the constant ⊥ {\displaystyle \bot } . Instead of the formulation with that constant, one may adopt as axioms the contraposition principle ( B → A ) → ( ¬ A → ¬ B ) {\displaystyle (B\to A)\to (\neg A\to \neg B)} , together with the double negation principle B → ¬ ¬ B {\displaystyle B\to \neg \neg B} . This gives an alternative axiomatization of minimal logic over the positive fragment of intuitionistic logic. The tactic of generalizing ¬ A {\displaystyle \neg A} to A → C {\displaystyle A\to C} does not work to prove all classically valid statements involving double negations. In particular, unsurprisingly, the naive generalization of the double negation elimination ¬ ¬ B → B {\displaystyle \neg \neg B\to B} cannot be proven in this way. Indeed, whatever A {\displaystyle A} looks like, any schema of the syntactic form ( A → C ) → B {\displaystyle (A\to C)\to B} would be too strong: considering any true proposition for C {\displaystyle C} makes this equivalent to just B {\displaystyle B} . The proposition ¬ ¬ ( B ∨ ¬ B ) {\displaystyle \neg \neg (B\lor \neg B)} is a theorem of minimal logic, as is ( A ∧ ¬ A ) → ¬ ¬ B {\displaystyle (A\land \neg A)\to \neg \neg B} . Therefore, adopting the full double negation principle ¬ ¬ B → B {\displaystyle \neg \neg B\to B} in minimal logic actually also proves explosion, and so brings the calculus back to classical logic , thereby also skipping all intermediate logics . As seen above, the double negated excluded middle for any proposition is already provable in minimal logic. However, it is worth emphasizing that in the predicate calculus, not even the laws of the strictly stronger intuitionistic logic enable a proof of the double negation of an infinite conjunction of excluded middle statements. Indeed, In turn, the double negation shift schema (DNS) is not valid either, that is Beyond arithmetic , this unprovability allows for the axiomatization of non-classical theories. Excluded middle is often valid in paraconsistent logics, but not so in minimal logic. In minimal logic, it can be shown to be equivalent to consequentia mirabilis . Minimal logic only proves the double-negation of excluded middle, and weak variants of consequentia mirabilis, as used above and as shown in its own article. Minimal logic proves weakening, i.e. allows for implication introduction in the propositional form C → ( B → C ) {\displaystyle C\to (B\to C)} . That principle plays a role in derivations of deduction theorems . The law of identity A → A {\displaystyle A\to A} holds even in very weak logics. With it, in minimal logic weakening can further be used to prove B → ( A → A ) {\displaystyle B\to (A\to A)} . In proving weakening, for example, minimal logic is unlike relevance logic . Hence, of course, the logic discussed here is not minimal in a formal sense.
Any formula using only ∧ , ∨ , → {\displaystyle \land ,\lor ,\to } is provable in minimal logic if and only if it is provable in intuitionistic logic. There are also propositional logic statements that are unprovable in minimal logic but do hold intuitionistically. The principle of explosion is valid in intuitionistic logic and expresses that any and all propositions may be derived by deriving an absurdity. In minimal logic, this principle does not axiomatically hold for arbitrary propositions. As minimal logic represents only the positive fragment of intuitionistic logic, it is a subsystem of intuitionistic logic and is strictly weaker. Both logics have the disjunction property . With explosion for negated statements, full explosion is equivalent to its special case ( ( ¬ B ) ∧ ¬ ( ¬ B ) ) → B {\displaystyle ((\neg B)\land \neg (\neg B))\to B} . The latter can be phrased as double negation elimination for rejected propositions, ¬ B → ( ¬ ¬ B → B ) {\displaystyle \neg B\to (\neg \neg B\to B)} . Formulated concisely, explosion in intuitionistic logic exactly grants particular cases of the double negation elimination principle that minimal logic does not have. This implication also immediately entails the full disjunctive syllogism as in the next section. Practically, in the intuitionistic context, the principle of explosion enables proving the disjunctive syllogism in the form of a single proposition: This can be read as follows: Given a constructive proof of A ∨ B {\displaystyle A\lor B} and constructive rejection of A {\displaystyle A} , one unconditionally allows for the positive case choice of B {\displaystyle B} , and here not only the double-negation thereof. In this way, the syllogism is an unpacking principle for the disjunction. It can be seen as a formal consequence of explosion and it also implies it. This is because if A ∨ B {\displaystyle A\lor B} was proven by proving B {\displaystyle B} then B {\displaystyle B} is already proven, while if A ∨ B {\displaystyle A\lor B} was proven by proving A {\displaystyle A} , then B {\displaystyle B} also follows, as the intuitionistic system allows for explosion. For example, given a constructive argument that a coin flip resulted in either heads or tails ( A {\displaystyle A} or B {\displaystyle B} ), together with a constructive argument that the outcome was in fact not heads, the proposition encapsulating the syllogism expresses that then this already constitutes an argument that tails occurred. If the intuitionistic logic system is metalogically assumed consistent, the syllogism may be read as saying that a constructive demonstration of A ∨ B {\displaystyle A\lor B} and ¬ A {\displaystyle \neg A} , in the absence of other non-logical axioms demonstrating B {\displaystyle B} , actually contains a demonstration of B {\displaystyle B} . Johansson proves in his article that even though ( ( A ∨ B ) ∧ ¬ A ) → B {\displaystyle ((A\lor B)\land \neg A)\to B} is not a theorem of minimal logic, the provability of ( A ∨ B ) ∧ ¬ A {\displaystyle (A\lor B)\land \neg A} entails the provability of B {\displaystyle B} . Thus, this step is what is called an admissible rule of inference. His proof uses Gentzen's sequent calculus for intuitionistic logic.
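A sketch of this contrast in Lean 4: only the first proof appeals to explosion (through absurd, which is built on False.elim); the second, with the weaker double-negated conclusion, is minimal-logic-valid:

```lean
variable (A B : Prop)

-- Intuitionistic disjunctive syllogism: ((A ∨ B) ∧ ¬A) → B.
-- The first branch appeals to explosion via absurd.
example : (A ∨ B) ∧ ¬A → B :=
  fun ⟨hab, na⟩ => hab.elim (fun a => absurd a na) id

-- Minimal logic salvages only the double-negated conclusion,
-- with no appeal to explosion.
example : (A ∨ B) ∧ ¬A → ¬¬B :=
  fun ⟨hab, na⟩ nb => hab.elim na nb
```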
Weak forms of explosion prove the disjunctive syllogism; in the other direction, the instance of the syllogism with A = ¬ B {\displaystyle A=\neg B} reads ( ( B ∨ ¬ B ) ∧ ¬ ¬ B ) → B {\displaystyle {\big (}(B\lor \neg B)\land \neg \neg B{\big )}\to B} and is equivalent to the double negation elimination for propositions for which excluded middle holds As the material conditional grants double-negation elimination for proven propositions, this is again equivalent to double negation elimination for rejected propositions. Finally, note that with explosion, in intuitionistic logic A ∨ ( ⊥ → P ) {\displaystyle A\lor (\bot \to P)} holds trivially for any A {\displaystyle A} . E.g., this disjunction is intuitionistically also provable for A = ⊥ {\displaystyle A=\bot } , a false disjunct (which itself is not provable even in classical logic). Minimal logic in general proves neither of the two disjuncts. The following Heyting arithmetic theorem allows for proofs of existence claims that cannot be proven, by means of this general result, without the explosion principle. The result is essentially a family of simple double negation elimination claims, ∃ {\displaystyle \exists } -sentences binding a computable predicate. Let P {\displaystyle P} be any quantifier-free predicate, and thus decidable for all numbers n {\displaystyle n} , so that excluded middle holds, Then, by induction on m {\displaystyle m} , In words: For the numbers n {\displaystyle n} within a finite range up to m {\displaystyle m} , if it can be ruled out that no case is validating, i.e. if it can be ruled out that for every number, say n = a {\displaystyle n=a} , the corresponding proposition P ( a ) {\displaystyle P(a)} will always be disprovable, then this implies that there is some n = b {\displaystyle n=b} among those n {\displaystyle n} 's for which P ( b ) {\displaystyle P(b)} is provable. As with examples discussed previously, a proof of this requires explosion on the antecedent side to obtain propositions without negations. If the proposition is formulated as starting at m = 0 {\displaystyle m=0} , then this initial case already gives a form of explosion from a vacuous clause The next case m = 1 {\displaystyle m=1} states the double negation elimination for a decidable predicate, The m = 2 {\displaystyle m=2} case reads which, as already noted, is equivalent to Both m = 0 {\displaystyle m=0} and m = 1 {\displaystyle m=1} are again cases of double negation elimination for a decidable predicate. Of course, a statement ∃ ( b < m ) . P ( b ) {\displaystyle \exists (b<m).P(b)} for fixed m {\displaystyle m} and P {\displaystyle P} may be provable by other means, using principles of minimal logic. As an aside, the unbounded schema for general decidable predicates is not even intuitionistically provable; see Markov's principle . In this section we mention the system obtained by restricting minimal logic to implication only. Functional programming calculi foremost depend on the implication connective; see e.g. the calculus of constructions for a predicate logic framework. The system can be defined by the following sequent rules: Each formula of this restricted minimal logic corresponds to a type in the simply typed lambda calculus ; see the Curry–Howard correspondence . This implicational fragment of minimal logic is the same as the positive, implicational fragment of intuitionistic logic and, in the type theory context, is often also denoted "minimal logic".
[ 4 ] Absurdity ⊥ {\displaystyle \bot } is used not only in natural deduction , but also in type theoretical formulations under Curry–Howard. In type systems, ⊥ {\displaystyle \bot } is often also introduced as the empty type. The existence of a proof for that proposition then constitutes an inconsistency. In many contexts, ⊥ {\displaystyle \bot } need not be a separate constant in the logic but its role can be replaced with any rejected proposition. For example, it can be defined as a = b {\displaystyle a=b} where a , b {\displaystyle a,b} ought to be distinct. That proposition may then be denoted by the same symbol, ⊥ {\displaystyle \bot } . Such a definition may also be fruitful over plain constructive logic. An example characterization for such ⊥ {\displaystyle \bot } is 0 = 1 {\displaystyle 0=1} in a theory involving natural numbers. Note that assuming ⊥ {\displaystyle \bot } here, any two given numbers can be proven equal. For example, from 1 = 0 {\displaystyle 1=0} follows 7 = 6 + 1 = 6 + 0 = 6 {\displaystyle 7=6+1=6+0=6} . Note also that a proof of this form is possible even in the absence of explosion, the logical axiom. An arithmetic is hence dubbed inconsistent if 0 = 1 {\displaystyle 0=1} can be derived. In a context with this definition, proving 3 4 = 8 {\displaystyle 3^{4}=8} to be false, i.e. ¬ ( 3 4 = 8 ) {\displaystyle \neg (3^{4}=8)} , just means to prove ( 3 4 = 8 ) → ( 0 = 1 ) {\displaystyle (3^{4}=8)\to (0=1)} . We may introduce the notation 3 4 ≠ 8 {\displaystyle 3^{4}\neq 8} to capture the claim as well. And indeed, using arithmetic, 3 4 − 8 73 = 1 {\displaystyle {\tfrac {3^{4}-8}{73}}=1} holds, but ( 3 4 = 8 ) {\displaystyle (3^{4}=8)} also implies 3 4 − 8 73 = 0 {\displaystyle {\tfrac {3^{4}-8}{73}}=0} . So this would imply 1 = 0 {\displaystyle 1=0} and hence we obtain ¬ ( 3 4 = 8 ) {\displaystyle \neg (3^{4}=8)} . QED. There are semantics of minimal logic that mirror the frame-semantics of intuitionistic logic ; see the discussion of semantics in paraconsistent logic . Here the valuation functions assigning truth and falsity to propositions can be subject to fewer constraints.
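The arithmetic step in this example can be checked mechanically; a minimal sketch in Lean 4, deriving 7 = 6 from an assumed inconsistency 1 = 0 by rewriting and computation alone:

```lean
-- From an assumed 1 = 0, the chain 7 = 6 + 1 = 6 + 0 = 6 goes through
-- without ever invoking the explosion axiom (False.elim).
example (h : (1 : Nat) = 0) : (7 : Nat) = 6 :=
  calc (7 : Nat) = 6 + 1 := rfl
    _ = 6 + 0 := by rw [h]
    _ = 6 := rfl
```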
https://en.wikipedia.org/wiki/Minimal_logic
In field theory , a branch of mathematics , the minimal polynomial of an element α of an extension field of a field is, roughly speaking, the polynomial of lowest degree having coefficients in the smaller field, such that α is a root of the polynomial. If the minimal polynomial of α exists, it is unique. The coefficient of the highest-degree term in the polynomial is required to be 1. More formally, a minimal polynomial is defined relative to a field extension E / F and an element of the extension field E . The minimal polynomial of an element, if it exists, is a member of F [ x ] , the ring of polynomials in the variable x with coefficients in F . Given an element α of E , let J α be the set of all polynomials f ( x ) in F [ x ] such that f ( α ) = 0 . The element α is called a root or zero of each polynomial in J α . More specifically, J α is the kernel of the ring homomorphism from F [ x ] to E which sends polynomials g to their value g ( α ) at the element α . Because it is the kernel of a ring homomorphism, J α is an ideal of the polynomial ring F [ x ]: it is closed under polynomial addition and subtraction (hence containing the zero polynomial), as well as under multiplication by elements of F (which is scalar multiplication if F [ x ] is regarded as a vector space over F ). The zero polynomial, all of whose coefficients are 0, is in every J α since 0 α i = 0 for all α and i . This makes the zero polynomial useless for classifying different values of α into types, so it is excepted. If there are any non-zero polynomials in J α , i.e. if the latter is not the zero ideal, then α is called an algebraic element over F , and there exists a monic polynomial of least degree in J α . This is the minimal polynomial of α with respect to E / F . It is unique and irreducible over F . If the zero polynomial is the only member of J α , then α is called a transcendental element over F and has no minimal polynomial with respect to E / F . Minimal polynomials are useful for constructing and analyzing field extensions. When α is algebraic with minimal polynomial f ( x ) , the smallest field that contains both F and α is isomorphic to the quotient ring F [ x ]/⟨ f ( x )⟩ , where ⟨ f ( x )⟩ is the ideal of F [ x ] generated by f ( x ) . Minimal polynomials are also used to define conjugate elements . Let E / F be a field extension , α an element of E , and F [ x ] the ring of polynomials in x over F . The element α has a minimal polynomial when α is algebraic over F , that is, when f ( α ) = 0 for some non-zero polynomial f ( x ) in F [ x ]. Then the minimal polynomial of α is defined as the monic polynomial of least degree among all polynomials in F [ x ] having α as a root. Throughout this section, let E / F be a field extension as above, let α ∈ E be an algebraic element over F and let J α be the ideal of polynomials vanishing on α . The minimal polynomial f of α is unique. To prove this, suppose that f and g are monic polynomials in J α of minimal degree n > 0. We have that r := f − g ∈ J α (because the latter is closed under addition/subtraction) and that m := deg( r ) < n (because the polynomials are monic of the same degree). If r is not zero, then r / c m (writing c m ∈ F for the non-zero coefficient of highest degree in r ) is a monic polynomial of degree m < n such that r / c m ∈ J α (because the latter is closed under multiplication/division by non-zero elements of F ), which contradicts our original assumption of minimality for n . We conclude that 0 = r = f − g , i.e. that f = g .
The minimal polynomial f of α is irreducible, i.e. it cannot be factorized as f = gh for two polynomials g and h of strictly lower degree. To prove this, first observe that any factorization f = gh implies that either g ( α ) = 0 or h ( α ) = 0, because f ( α ) = 0 and E is a field (hence also an integral domain ). Choosing both g and h to be of degree strictly lower than f would then contradict the minimality requirement on f , so f must be irreducible. The minimal polynomial f of α generates the ideal J α , i.e. every g in J α can be factorized as g = fh for some h in F [ x ]. To prove this, it suffices to observe that F [ x ] is a principal ideal domain , because F is a field: this means that every ideal I in F [ x ], J α amongst them, is generated by a single element f . With the exception of the zero ideal I = {0}, the generator f must be non-zero and it must be the unique polynomial of minimal degree, up to a factor in F (because the degree of fg is strictly larger than that of f whenever g is of degree greater than zero). In particular, there is a unique monic generator f , and all generators must be irreducible. When I is chosen to be J α , for α algebraic over F , then the monic generator f is the minimal polynomial of α . Given a Galois field extension L / K {\displaystyle L/K} the minimal polynomial of any α ∈ L {\displaystyle \alpha \in L} not in K {\displaystyle K} can be computed as f ( x ) = ∏ σ ∈ Gal ( L / K ) ( x − σ ( α ) ) {\displaystyle f(x)=\prod _{\sigma \in {\text{Gal}}(L/K)}(x-\sigma (\alpha ))} if α {\displaystyle \alpha } has trivial stabilizer in the Galois action. Since it is irreducible (which can be deduced by looking at the roots of f ′ {\displaystyle f'} ), it is the minimal polynomial. Note that the same kind of formula can be found by replacing G = Gal ( L / K ) {\displaystyle G={\text{Gal}}(L/K)} with G / N {\displaystyle G/N} where N = Stab ( α ) {\displaystyle N={\text{Stab}}(\alpha )} is the stabilizer group of α {\displaystyle \alpha } . For example, if α ∈ K {\displaystyle \alpha \in K} then its stabilizer is G {\displaystyle G} , hence ( x − α ) {\displaystyle (x-\alpha )} is its minimal polynomial. If F = Q , E = R , α = √ 2 , then the minimal polynomial for α is a ( x ) = x 2 − 2. The base field F is important as it determines the possibilities for the coefficients of a ( x ). For instance, if we take F = R , then the minimal polynomial for α = √ 2 is a ( x ) = x − √ 2 . In general, for the quadratic extension given by a square-free d {\displaystyle d} , the minimal polynomial of an element a + b d {\displaystyle a+b{\sqrt {d\,}}} can be computed using Galois theory. Then f ( x ) = ( x − ( a + b d ) ) ( x − ( a − b d ) ) = x 2 − 2 a x + ( a 2 − b 2 d ) {\displaystyle {\begin{aligned}f(x)&=(x-(a+b{\sqrt {d\,}}))(x-(a-b{\sqrt {d\,}}))\\&=x^{2}-2ax+(a^{2}-b^{2}d)\end{aligned}}} in particular, if a + b d {\displaystyle a+b{\sqrt {d\,}}} is an algebraic integer, this implies 2 a ∈ Z {\displaystyle 2a\in \mathbb {Z} } and a 2 − b 2 d ∈ Z {\displaystyle a^{2}-b^{2}d\in \mathbb {Z} } . This can be used to determine O Q ( d ) {\displaystyle {\mathcal {O}}_{\mathbb {Q} ({\sqrt {d\,}}\!\!\!\;\;)}} through a series of relations using modular arithmetic . If α = √ 2 + √ 3 , then the minimal polynomial in Q [ x ] is a ( x ) = x 4 − 10 x 2 + 1 = ( x − √ 2 − √ 3 )( x + √ 2 − √ 3 )( x − √ 2 + √ 3 )( x + √ 2 + √ 3 ). Notice if α = 2 {\displaystyle \alpha ={\sqrt {2}}} then the Galois action on 3 {\displaystyle {\sqrt {3}}} stabilizes α {\displaystyle \alpha } .
Hence the minimal polynomial can be found using the quotient group Gal ( Q ( 2 , 3 ) / Q ) / Gal ( Q ( 3 ) / Q ) {\displaystyle {\text{Gal}}(\mathbb {Q} ({\sqrt {2}},{\sqrt {3}})/\mathbb {Q} )/{\text{Gal}}(\mathbb {Q} ({\sqrt {3}})/\mathbb {Q} )} . The minimal polynomials in Q [ x ] of roots of unity are the cyclotomic polynomials . The roots of the minimal polynomial of 2cos(2 π /n) are twice the real parts of the primitive n th roots of unity. The minimal polynomial in Q [ x ] of the sum of the square roots of the first n prime numbers is constructed analogously, and is called a Swinnerton-Dyer polynomial .
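The examples above can be checked with a computer algebra system; a minimal sketch using SymPy (assuming SymPy is available):

```python
from sympy import sqrt, symbols, minimal_polynomial

x = symbols("x")

# Minimal polynomial of sqrt(2) over Q, as in the example above: x**2 - 2.
print(minimal_polynomial(sqrt(2), x))

# Minimal polynomial of sqrt(2) + sqrt(3) over Q: x**4 - 10*x**2 + 1.
print(minimal_polynomial(sqrt(2) + sqrt(3), x))
```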
https://en.wikipedia.org/wiki/Minimal_polynomial_(field_theory)
In linear algebra , the minimal polynomial μ A of an n × n matrix A over a field F is the monic polynomial P over F of least degree such that P ( A ) = 0 . Any other polynomial Q with Q ( A ) = 0 is a (polynomial) multiple of μ A . The following three statements are equivalent : The multiplicity of a root λ of μ A is the largest power m such that ker(( A − λI n ) m ) strictly contains ker(( A − λI n ) m −1 ) . In other words, increasing the exponent up to m will give ever larger kernels , but further increasing the exponent beyond m will just give the same kernel. If the field F is not algebraically closed , then the minimal and characteristic polynomials need not factor according to their roots (in F ) alone, in other words they may have irreducible polynomial factors of degree greater than 1 . For irreducible polynomials P one has similar equivalences: Like the characteristic polynomial, the minimal polynomial does not depend on the base field. In other words, considering the matrix as one with coefficients in a larger field does not change the minimal polynomial. The reason for this differs from the case with the characteristic polynomial (where it is immediate from the definition of determinants ), namely by the fact that the minimal polynomial is determined by the relations of linear dependence between the powers of A : extending the base field will not introduce any new such relations (nor of course will it remove existing ones). The minimal polynomial is often the same as the characteristic polynomial, but not always. For example, if A is a multiple aI n of the identity matrix , then its minimal polynomial is X − a since the kernel of aI n − A = 0 is already the entire space; on the other hand its characteristic polynomial is ( X − a ) n (the only eigenvalue is a , and the degree of the characteristic polynomial is always equal to the dimension of the space). The minimal polynomial always divides the characteristic polynomial, which is one way of formulating the Cayley–Hamilton theorem (for the case of matrices over a field). Given an endomorphism T on a finite-dimensional vector space V over a field F , let I T be the set defined as where F [ t ] is the space of all polynomials over the field F . I T is a proper ideal of F [ t ] . Since F is a field, F [ t ] is a principal ideal domain , thus any ideal is generated by a single polynomial, which is unique up to a unit in F . A particular choice among the generators can be made, since precisely one of the generators is monic . The minimal polynomial is thus defined to be the monic polynomial that generates I T . It is the monic polynomial of least degree in I T . An endomorphism φ of a finite-dimensional vector space over a field F is diagonalizable if and only if its minimal polynomial factors completely over F into distinct linear factors. The fact that there is only one factor X − λ for every eigenvalue λ means that the generalized eigenspace for λ is the same as the eigenspace for λ : every Jordan block has size 1 . More generally, if φ satisfies a polynomial equation P ( φ ) = 0 where P factors into distinct linear factors over F , then it will be diagonalizable: its minimal polynomial is a divisor of P and therefore also factors into distinct linear factors. In particular one has: These cases can also be proved directly, but the minimal polynomial gives a unified perspective and proof. For a nonzero vector v in V define: This definition satisfies the properties of a proper ideal. 
Let μ T , v be the monic polynomial which generates it. and for these coefficients one has Define T to be the endomorphism of R 3 with matrix, on the canonical basis , Taking the first canonical basis vector e 1 and its repeated images by T one obtains of which the first three are easily seen to be linearly independent , and therefore span all of R 3 . The last one then necessarily is a linear combination of the first three, in fact so that: This is in fact also the minimal polynomial μ T and the characteristic polynomial χ T : indeed μ T , e 1 divides μ T which divides χ T , and since the first and last are of degree 3 and all are monic, they must all be the same. Another reason is that in general if any polynomial in T annihilates a vector v , then it also annihilates T ⋅ v (just apply T to the equation that says that it annihilates v ), and therefore by iteration it annihilates the entire space generated by the iterated images by T of v ; in the current case we have seen that for v = e 1 that space is all of R 3 , so μ T , e 1 ( T ) = 0 . Indeed one verifies for the full matrix that T 3 + 4 T 2 + T − I 3 is the zero matrix :
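As noted above, the minimal polynomial is determined by the relations of linear dependence between the powers of A. A sketch of that computation in Python with SymPy; the helper name is ours, and the example matrix, a multiple of the identity as in the earlier example, is purely illustrative:

```python
import sympy as sp

def matrix_minimal_polynomial(A, t):
    """A sketch: the monic polynomial of least degree with P(A) = 0, found
    as the first linear dependence among the powers I, A, A^2, ...
    (the Cayley-Hamilton theorem guarantees a dependence by degree n)."""
    n = A.rows
    powers = [sp.eye(n)]
    for k in range(1, n + 1):
        powers.append(powers[-1] * A)
        # Columns are vec(A^0), ..., vec(A^k).
        M = sp.Matrix.hstack(*[p.reshape(n * n, 1) for p in powers])
        null = M.nullspace()
        if null:
            c = null[0] / null[0][k]  # normalize: leading coefficient 1
            return sp.expand(sum(c[i] * t**i for i in range(k + 1)))

t = sp.symbols("t")
A = 3 * sp.eye(4)  # a multiple of the identity, as discussed above
print(matrix_minimal_polynomial(A, t))  # t - 3
print(A.charpoly(t).as_expr())          # (t - 3)**4, expanded
```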
https://en.wikipedia.org/wiki/Minimal_polynomial_(linear_algebra)
In number theory , the real parts of the roots of unity are related to one another by means of the minimal polynomial of 2 cos ⁡ ( 2 π / n ) . {\displaystyle 2\cos(2\pi /n).} The roots of the minimal polynomial are twice the real part of the roots of unity, where the real part of a root of unity is just cos ⁡ ( 2 k π / n ) {\displaystyle \cos \left(2k\pi /n\right)} with k {\displaystyle k} coprime with n . {\displaystyle n.} For an integer n ≥ 1 {\displaystyle n\geq 1} , the minimal polynomial Ψ n ( x ) {\displaystyle \Psi _{n}(x)} of 2 cos ⁡ ( 2 π / n ) {\displaystyle 2\cos(2\pi /n)} is the non-zero integer-coefficient monic polynomial of smallest degree for which Ψ n ( 2 cos ⁡ ( 2 π / n ) ) = 0 {\displaystyle \Psi _{n}\!\left(2\cos(2\pi /n)\right)=0} . For every n , the polynomial Ψ n ( x ) {\displaystyle \Psi _{n}(x)} is monic, has integer coefficients, and is irreducible over the integers and the rational numbers . All its roots are real ; they are the real numbers 2 cos ⁡ ( 2 k π / n ) {\displaystyle 2\cos \left(2k\pi /n\right)} with k {\displaystyle k} coprime with n {\displaystyle n} and either 1 ≤ k < n {\displaystyle 1\leq k<n} or k = n = 1. {\displaystyle k=n=1.} These roots are twice the real parts of the primitive n th roots of unity . The number of integers k {\displaystyle k} relatively prime to n {\displaystyle n} is given by Euler's totient function φ ( n ) ; {\displaystyle \varphi (n);} it follows that the degree of Ψ n ( x ) {\displaystyle \Psi _{n}(x)} is 1 {\displaystyle 1} for n = 1 , 2 {\displaystyle n=1,2} and φ ( n ) / 2 {\displaystyle \varphi (n)/2} for n ≥ 3. {\displaystyle n\geq 3.} The first two polynomials are Ψ 1 ( x ) = x − 2 {\displaystyle \Psi _{1}(x)=x-2} and Ψ 2 ( x ) = x + 2. {\displaystyle \Psi _{2}(x)=x+2.} The polynomials Ψ n ( x ) {\displaystyle \Psi _{n}(x)} are typical examples of irreducible polynomials whose roots are all real and which have a cyclic Galois group . The first few polynomials Ψ n ( x ) {\displaystyle \Psi _{n}(x)} are If n {\displaystyle n} is an odd prime, the polynomial Ψ n ( x ) {\displaystyle \Psi _{n}(x)} can be written in terms of binomial coefficients following a "zigzag path" through Pascal's triangle : Putting n = 2 m + 1 {\displaystyle n=2m+1} and then we have Ψ p ( x ) = χ p ( x ) {\displaystyle \Psi _{p}(x)=\chi _{p}(x)} for primes p {\displaystyle p} . If n {\displaystyle n} is odd but not a prime, the same polynomial χ n ( x ) {\displaystyle \chi _{n}(x)} , as can be expected, is reducible and, corresponding to the structure of the cyclotomic polynomials Φ d ( x ) {\displaystyle \Phi _{d}(x)} reflected by the formula ∏ d ∣ n Φ d ( x ) = x n − 1 {\displaystyle \prod _{d\mid n}\Phi _{d}(x)=x^{n}-1} , turns out to be just the product of all Ψ d ( x ) {\displaystyle \Psi _{d}(x)} for the divisors d > 1 {\displaystyle d>1} of n {\displaystyle n} , including n {\displaystyle n} itself: This means that the Ψ d ( x ) {\displaystyle \Psi _{d}(x)} are exactly the irreducible factors of χ n ( x ) {\displaystyle \chi _{n}(x)} , which allows one to easily obtain Ψ d ( x ) {\displaystyle \Psi _{d}(x)} for any odd d {\displaystyle d} , knowing its degree 1 2 φ ( d ) {\displaystyle {\frac {1}{2}}\varphi (d)} .
For example, From the below formula in terms of Chebyshev polynomials and the product formula for odd n {\displaystyle n} above, we can derive for even n {\displaystyle n} Independently of this, if n = 2 k {\displaystyle n=2^{k}} is an even prime power, we have for k ≥ 2 {\displaystyle k\geq 2} the recursion (see OEIS : A158982 ) starting with Ψ 4 ( x ) = x {\displaystyle \Psi _{4}(x)=x} . The roots of Ψ n ( x ) {\displaystyle \Psi _{n}(x)} are given by 2 cos ⁡ ( 2 π k n ) {\displaystyle 2\cos \left({\frac {2\pi k}{n}}\right)} , [ 1 ] where 1 ≤ k < n 2 {\displaystyle 1\leq k<{\frac {n}{2}}} and gcd ( k , n ) = 1 {\displaystyle \gcd(k,n)=1} . Since Ψ n ( x ) {\displaystyle \Psi _{n}(x)} is monic, we have Combining this result with the fact that the function cos ⁡ ( x ) {\displaystyle \cos(x)} is even , we find that 2 cos ⁡ ( 2 π k n ) {\displaystyle 2\cos \left({\frac {2\pi k}{n}}\right)} is an algebraic integer for any positive integer n {\displaystyle n} and any integer k {\displaystyle k} . For a positive integer n {\displaystyle n} , let ζ n = exp ⁡ ( 2 π i n ) = cos ⁡ ( 2 π n ) + sin ⁡ ( 2 π n ) i {\displaystyle \zeta _{n}=\exp \left({\frac {2\pi i}{n}}\right)=\cos \left({\frac {2\pi }{n}}\right)+\sin \left({\frac {2\pi }{n}}\right)i} , a primitive n {\displaystyle n} -th root of unity. Then the minimal polynomial of ζ n {\displaystyle \zeta _{n}} is given by the n {\displaystyle n} -th cyclotomic polynomial Φ n ( x ) {\displaystyle \Phi _{n}(x)} . Since ζ n − 1 = cos ⁡ ( 2 π n ) − sin ⁡ ( 2 π n ) i {\displaystyle \zeta _{n}^{-1}=\cos \left({\frac {2\pi }{n}}\right)-\sin \left({\frac {2\pi }{n}}\right)i} , the relation between 2 cos ⁡ ( 2 π n ) {\displaystyle 2\cos \left({\frac {2\pi }{n}}\right)} and ζ n {\displaystyle \zeta _{n}} is given by 2 cos ⁡ ( 2 π n ) = ζ n + ζ n − 1 {\displaystyle 2\cos \left({\frac {2\pi }{n}}\right)=\zeta _{n}+\zeta _{n}^{-1}} . This relation can be exhibited in the following identity proved by Lehmer, which holds for any non-zero complex number z {\displaystyle z} : [ 2 ] In 1993, Watkins and Zeitlin established the following relation between Ψ n ( x ) {\displaystyle \Psi _{n}(x)} and Chebyshev polynomials of the first kind. [ 1 ] If n = 2 s + 1 {\displaystyle n=2s+1} is odd , then and if n = 2 s {\displaystyle n=2s} is even , then If n {\displaystyle n} is a power of 2 {\displaystyle 2} , we have moreover directly [ 3 ] The absolute value of the constant coefficient of Ψ n ( x ) {\displaystyle \Psi _{n}(x)} can be determined as follows: [ 4 ] The algebraic number field K n = Q ( ζ n + ζ n − 1 ) {\displaystyle K_{n}=\mathbb {Q} \left(\zeta _{n}+\zeta _{n}^{-1}\right)} is the maximal real subfield of a cyclotomic field Q ( ζ n ) {\displaystyle \mathbb {Q} (\zeta _{n})} . If O K n {\displaystyle {\mathcal {O}}_{K_{n}}} denotes the ring of integers of K n {\displaystyle K_{n}} , then O K n = Z [ ζ n + ζ n − 1 ] {\displaystyle {\mathcal {O}}_{K_{n}}=\mathbb {Z} \left[\zeta _{n}+\zeta _{n}^{-1}\right]} . In other words, the set { 1 , ζ n + ζ n − 1 , … , ( ζ n + ζ n − 1 ) φ ( n ) 2 − 1 } {\displaystyle \left\{1,\zeta _{n}+\zeta _{n}^{-1},\ldots ,\left(\zeta _{n}+\zeta _{n}^{-1}\right)^{{\frac {\varphi (n)}{2}}-1}\right\}} is an integral basis of O K n {\displaystyle {\mathcal {O}}_{K_{n}}} . In view of this, the discriminant of the algebraic number field K n {\displaystyle K_{n}} is equal to the discriminant of the polynomial Ψ n ( x ) {\displaystyle \Psi _{n}(x)} , that is [ 5 ]
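A minimal sketch computing Ψ n (x) with SymPy and checking the degree φ(n)/2 stated above (assuming SymPy is available):

```python
from sympy import cos, pi, symbols, minimal_polynomial, totient, degree

x = symbols("x")

# Psi_n(x) for a few n, alongside the expected degree phi(n)/2 (n >= 3).
for n in [3, 4, 5, 6, 7, 12]:
    psi = minimal_polynomial(2 * cos(2 * pi / n), x)
    print(n, psi, degree(psi, x), totient(n) // 2)
```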
https://en.wikipedia.org/wiki/Minimal_polynomial_of_2cos(2pi/n)
Minimally invasive education ( MIE ) is a form of learning in which children operate in unsupervised environments. The methodology arose from an experiment done by Sugata Mitra while at NIIT in 1999, often called The Hole in the Wall , [ 1 ] [ 2 ] which has since gone on to become a significant project with the formation of Hole in the Wall Education Limited (HiWEL) , a cooperative effort between NIIT and the International Finance Corporation , employed in some 300 'learning stations', covering some 300,000 children in India and several African countries. The programme has been feted with the digital opportunity award by WITSA , [ 2 ] and has been extensively covered in the media. Professor Mitra, Chief Scientist at NIIT, is credited with proposing and initiating the Hole-in-the-Wall programme. As early as 1982, he had been toying with the idea of unsupervised learning and computers . Finally, in 1999, he decided to test his ideas in the field. On 26 January 1999, Mitra's team carved a "hole in the wall" that separated the NIIT premises from the adjoining slum in Kalkaji, New Delhi. Through this hole, a freely accessible computer was put up for use. This computer proved to be popular among the slum children. With no prior experience, the children learned to use the computer on their own. This prompted Mitra to propose the following hypothesis: [ 3 ] The acquisition of basic computing skills by any set of children can be achieved through incidental learning provided the learners are given access to a suitable computing facility, with entertaining and motivating content and some minimal (human) guidance. In the following comment on the TED website, Mitra explains how they saw to it that the computer in this experiment was accessible to children only: Mitra has summarised the results of his experiment as follows. Given free and public access to computers and the Internet, a group of children can The first adopter of the idea was the Government of National Capital Territory of Delhi. In 2000, the Government of Delhi set up 30 Learning Stations in a resettlement colony. This project is ongoing and is said to be achieving significant results. Encouraged by the initial success of the Kalkaji experiment, freely accessible computers were set up in Shivpuri (a town in Madhya Pradesh) and in Madantusi (a village in Uttar Pradesh). These experiments came to be known as Hole-in-the-Wall experiments. The findings from Shivpuri and Madantusi confirmed the results of the Kalkaji experiment. It appeared that the children in these two places picked up computer skills on their own. Dr. Mitra defined this new way of learning as "Minimally Invasive Education". At this point, the International Finance Corporation joined hands with NIIT to set up Hole-in-the-Wall Education Ltd (HiWEL). The idea was to broaden the scope of the experiments and conduct research to prove and streamline Hole-in-the-Wall. The results [ 6 ] show that children learn to operate as well as play with the computer with minimum intervention. They picked up skills and tasks by constructing their own learning environment. Today, more than 300,000 children have benefited from 300 Hole-in-the-Wall stations over the last 8 years. In India, Suhotra Banerjee (Head-Government Relations) has increased the reach of HiWEL learning stations in Nagaland, Jharkhand, Andhra Pradesh... and is slowly expanding their numbers. [ 7 ] Besides India, HiWEL also has projects abroad. The first such project was established in Cambodia in 2004.
The project currently operates in Botswana, Mozambique, Nigeria, Rwanda, Swaziland, Uganda, and Zambia, besides Cambodia. [ 7 ] The idea, also called Open learning , is even being applied in Britain, albeit inside the classroom. [ 8 ] Hole-in-the-Wall Education Ltd. (HiWEL) is a joint venture between NIIT and the International Finance Corporation . Established in 2001, HiWEL was set up to research and propagate the idea of Hole-in-the-Wall, a path-breaking learning methodology created by Mitra, Chief Scientist of NIIT. [ 2 ] The project has received extensive coverage from sources as diverse as UNESCO , [ 9 ] Business Week , [ 10 ] CNN , Reuters , [ 7 ] and The Christian Science Monitor , [ 11 ] besides being featured at the annual TED conference in 2007. The project received international publicity when it was found to have been the inspiration behind the book Q & A , itself the inspiration for the Academy Award winning film Slumdog Millionaire . [ 7 ] HiWEL has been covered by the Indian Reader's Digest . [ 12 ] Minimally Invasive Education in school adduces that there are many reasons why children may have difficulty learning, especially when the learning is imposed and the subject is something the student is not interested in, a frequent occurrence in modern schools. Schools also label children as "learning disabled" and place them in special education even if the child does not have a learning disability, because the schools have failed to teach the children basic skills. [ 13 ] Minimally Invasive Education in school asserts that there are many ways to study and learn. It argues that learning is a process you do, not a process that is done to you. [ 14 ] The experience of schools holding this approach shows that there are many ways to learn without the intervention of teaching , that is, without the intervention of a teacher being imperative. In the case of reading, for instance, in these schools some children learn from being read to, memorizing the stories and then ultimately reading them. Others learn from cereal boxes, others from game instructions, others from street signs. Some teach themselves letter sounds, others syllables, others whole words. They adduce that in their schools no child has ever been forced, pushed, urged, cajoled, or bribed into learning how to read or write, and they have had no dyslexia. None of their graduates are real or functional illiterates, and no one who meets their older students could ever guess the age at which they first learned to read or write. [ 15 ] In a similar way, students learn all the subjects, techniques and skills in these schools. Every person, children and youth included, has a different learning style and pace, and each person is unique, not only capable of learning but also capable of succeeding. These schools assert that applying the medical model of problem-solving to individual pupils in the school system, and labeling these children as disabled (referring to a whole generation of non-standard children who have been labeled as dysfunctional, even though they suffer from nothing more than responding differently in the classroom than the average manageable student), systematically prevents the students' success and the improvement of the current educational system, thus requiring the prevention of academic failure through intervention.
This, they clarify, does not refer to people who have a specific disability that affects their drives; nor is anything they say and write about education meant to apply to people who have specific mental impairments, which may need to be dealt with in special, clinical ways. Describing current instructional methods as homogenization and lockstep standardization, proponents propose alternative approaches, such as the Sudbury model schools , in which children, enjoying personal freedom and thus encouraged to exercise personal responsibility for their actions , learn at their own pace rather than following a chronologically-based curriculum. [ 16 ] [ 17 ] [ 18 ] [ 19 ] These schools are organized to allow freedom from adult interference in the daily lives of students. As long as children do no harm to others, they can do whatever they want with their time in school. The adults in other schools plan a curriculum of study, teach the students the material and then test and grade their learning. The adults at Sudbury schools are "the guardians of the children's freedom to pursue their own interests and to learn what they wish," creating and maintaining a nurturing environment, in which children feel that they are cared for, and that does not rob children of their time to explore and discover their inner selves. They also are there to answer questions and to impart specific skills or knowledge when asked to by students. [ 20 ] [ 21 ] Like Sudbury schools, proponents of unschooling have also claimed that children raised in this way do not suffer from learning disabilities, thus not requiring the prevention of academic failure through intervention.
https://en.wikipedia.org/wiki/Minimally_invasive_education
Minimally manipulated cells are non-cultured (non-expanded) cells isolated from biological material by its grinding, homogenization or selective collection of cells, which undergo minimal manipulation. [ 1 ] Minimally manipulated cells are usually used for the treatment of skin ulceration , alopecia , and arthritis . [ 2 ] [ 3 ] Minimally manipulated cells can be used for the intraoperative creation of tissue-engineered grafts in situ . [ 4 ] Minimally manipulated cells are allowed to be an object of manufacture and homologous transplantation in the USA and European countries. The criteria for "minimal manipulation" vary between countries. European regulations, according to the Reflection Paper on the classification of advanced therapy medicinal products of the European Medicines Agency, define "minimal manipulation" as a procedure that does not change the biological characteristics and functions of cells. [ 5 ] In particular, enzymatic digestion of biomaterial, whereby cell-to-cell contacts are dissociated, is prohibited. According to the US regulations on human cells, tissues, and cellular and tissue-based products (US 21 Code of Federal Regulations § 1271.3(f)(1), Section 361 HCT/Ps), “minimal manipulation” is processing that does not alter the original relevant characteristics of the structural tissue relating to the tissue’s utility for reconstruction, repair, or replacement. [ 6 ] Russian regulations provide no specific definition for “minimally manipulated” cells. However, one follows from the content of the Order of the Russian Ministry of Health No. 1158n “On amending the list of transplantation objects”. According to the Order, cells obtained from biomaterial by its grinding, homogenization, enzymatic treatment, removal of unwanted components or by selective collection of cells can be considered “minimally manipulated”. Minimally manipulated cells are allowed to be an object of transplantation when they do not contain any other substances except for water, crystalloids, sterilizing, storage, and (or) specific preserving agents. [ 7 ]
https://en.wikipedia.org/wiki/Minimally_manipulated_cells
In statistical decision theory , where we are faced with the problem of estimating a deterministic parameter (vector) θ ∈ Θ {\displaystyle \theta \in \Theta } from observations x ∈ X , {\displaystyle x\in {\mathcal {X}},} an estimator (estimation rule) δ M {\displaystyle \delta ^{M}\,\!} is called minimax if its maximal risk is minimal among all estimators of θ {\displaystyle \theta \,\!} . In a sense this means that δ M {\displaystyle \delta ^{M}\,\!} is an estimator which performs best in the worst possible case allowed in the problem. Consider the problem of estimating a deterministic (not Bayesian ) parameter θ ∈ Θ {\displaystyle \theta \in \Theta } from noisy or corrupt data x ∈ X {\displaystyle x\in {\mathcal {X}}} related through the conditional probability distribution P ( x ∣ θ ) {\displaystyle P(x\mid \theta )\,\!} . Our goal is to find a "good" estimator δ ( x ) {\displaystyle \delta (x)\,\!} for estimating the parameter θ {\displaystyle \theta \,\!} , which minimizes some given risk function R ( θ , δ ) {\displaystyle R(\theta ,\delta )\,\!} . Here the risk function (technically a functional or operator , since R {\displaystyle R} is a function of a function, not a function composition) is the expectation of some loss function L ( θ , δ ) {\displaystyle L(\theta ,\delta )\,\!} with respect to P ( x ∣ θ ) {\displaystyle P(x\mid \theta )\,\!} . A popular example for a loss function [ 1 ] is the squared error loss L ( θ , δ ) = ‖ θ − δ ‖ 2 {\displaystyle L(\theta ,\delta )=\|\theta -\delta \|^{2}\,\!} , and the risk function for this loss is the mean squared error (MSE). Unfortunately, in general, the risk cannot be minimized since it depends on the unknown parameter θ {\displaystyle \theta \,\!} itself (if we knew the actual value of θ {\displaystyle \theta \,\!} , we would not need to estimate it). Therefore, additional criteria for finding an optimal estimator in some sense are required. One such criterion is the minimax criterion. Definition : An estimator δ M : X → Θ {\displaystyle \delta ^{M}:{\mathcal {X}}\rightarrow \Theta \,\!} is called minimax with respect to a risk function R ( θ , δ ) {\displaystyle R(\theta ,\delta )\,\!} if it achieves the smallest maximum risk among all estimators, meaning it satisfies Logically, an estimator is minimax when it is the best in the worst case. Continuing this logic, a minimax estimator should be a Bayes estimator with respect to a least favorable prior distribution of θ {\displaystyle \theta \,\!} . To demonstrate this notion, denote the average risk of the Bayes estimator δ π {\displaystyle \delta _{\pi }\,\!} with respect to a prior distribution π {\displaystyle \pi \,\!} as Definition: A prior distribution π {\displaystyle \pi \,\!} is called least favorable if for every other distribution π ′ {\displaystyle \pi '\,\!} the average risk satisfies r π ≥ r π ′ {\displaystyle r_{\pi }\geq r_{\pi '}\,} . Theorem 1: If r π = sup θ R ( θ , δ π ) , {\displaystyle r_{\pi }=\sup _{\theta }R(\theta ,\delta _{\pi }),\,} then: Corollary: If a Bayes estimator has constant risk, it is minimax. Note that this is not a necessary condition. Example 1: Unfair coin [ 2 ] [ 3 ] : Consider the problem of estimating the "success" rate of a binomial variable, x ∼ B ( n , θ ) {\displaystyle x\sim B(n,\theta )\,\!} . This may be viewed as estimating the rate at which an unfair coin falls on "heads" or "tails".
In this case the Bayes estimator with respect to a Beta -distributed prior, θ ∼ Beta ( √ n / 2 , √ n / 2 ) {\displaystyle \theta \sim {\text{Beta}}({\sqrt {n}}/2,{\sqrt {n}}/2)\,} is with constant Bayes risk and, according to the Corollary, is minimax. Definition: A sequence of prior distributions π n {\displaystyle \pi _{n}\,\!} is called least favorable if for any other distribution π ′ {\displaystyle \pi '\,\!} , Theorem 2: If there is a sequence of priors π n {\displaystyle \pi _{n}\,\!} and an estimator δ {\displaystyle \delta \,\!} such that sup θ R ( θ , δ ) = lim n → ∞ r π n {\displaystyle \sup _{\theta }R(\theta ,\delta )=\lim _{n\rightarrow \infty }r_{\pi _{n}}\,\!} , then : Notice that no uniqueness is guaranteed here. For example, the ML estimator from the previous example may be attained as the limit of Bayes estimators with respect to a uniform prior, π n ∼ U [ − n , n ] {\displaystyle \pi _{n}\sim U[-n,n]\,\!} with increasing support and also with respect to a zero-mean normal prior π n ∼ N ( 0 , n σ 2 ) {\displaystyle \pi _{n}\sim N(0,n\sigma ^{2})\,\!} with increasing variance. So neither the resulting minimax ML estimator nor the least favorable prior is unique. Example 2: Consider the problem of estimating the mean of a p {\displaystyle p\,\!} -dimensional Gaussian random vector, x ∼ N ( θ , I p σ 2 ) {\displaystyle x\sim N(\theta ,I_{p}\sigma ^{2})\,\!} . The maximum likelihood (ML) estimator for θ {\displaystyle \theta \,\!} in this case is simply δ ML = x {\displaystyle \delta _{\text{ML}}=x\,\!} , and its risk is The risk is constant, but the ML estimator is actually not a Bayes estimator, so the Corollary of Theorem 1 does not apply. However, the ML estimator is the limit of the Bayes estimators with respect to the prior sequence π n ∼ N ( 0 , n σ 2 ) {\displaystyle \pi _{n}\sim N(0,n\sigma ^{2})\,\!} , and hence is indeed minimax according to Theorem 2. Nonetheless, minimaxity does not always imply admissibility . In fact in this example, the ML estimator is known to be inadmissible (not admissible) whenever p > 2 {\displaystyle p>2\,\!} . The famous James–Stein estimator dominates the ML whenever p > 2 {\displaystyle p>2\,\!} . Though both estimators have the same risk p σ 2 {\displaystyle p\sigma ^{2}\,\!} when ‖ θ ‖ → ∞ {\displaystyle \|\theta \|\rightarrow \infty \,\!} , and they are both minimax, the James–Stein estimator has smaller risk for any finite ‖ θ ‖ {\displaystyle \|\theta \|\,\!} . This fact is illustrated in the following figure. In general, it is difficult, often even impossible, to determine the minimax estimator. Nonetheless, in many cases, a minimax estimator has been determined. Example 3: Bounded normal mean: Consider estimating the mean of a normal vector x ∼ N ( θ , I n σ 2 ) {\displaystyle x\sim N(\theta ,I_{n}\sigma ^{2})\,\!} , when it is known that ‖ θ ‖ 2 ≤ M {\displaystyle \|\theta \|^{2}\leq M\,\!} . The Bayes estimator with respect to a prior which is uniformly distributed on the edge of the bounding sphere is known to be minimax whenever M ≤ n {\displaystyle M\leq n\,\!} . The analytical expression for this estimator is where J n ( t ) {\displaystyle J_{n}(t)\,\!} is the modified Bessel function of the first kind of order n .
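A numerical sketch of the constant-risk property from Example 1 (n = 20 is an arbitrary choice; the closed form of the risk follows from a standard bias-variance expansion):

```python
import numpy as np
from scipy.stats import binom

# Bayes estimator from the Beta(sqrt(n)/2, sqrt(n)/2) prior:
# delta(x) = (x + sqrt(n)/2) / (n + sqrt(n)).
n = 20
xs = np.arange(n + 1)
delta = (xs + np.sqrt(n) / 2) / (n + np.sqrt(n))

# Risk R(theta, delta) = E[(delta(X) - theta)^2] under X ~ B(n, theta).
for theta in [0.1, 0.3, 0.5, 0.9]:
    risk = np.sum(binom.pmf(xs, n, theta) * (delta - theta) ** 2)
    print(theta, risk)

# All values agree with the constant n / (4 * (n + sqrt(n))**2),
# so by the Corollary the estimator is minimax.
print(n / (4 * (n + np.sqrt(n)) ** 2))
```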
The difficulty of determining the exact minimax estimator has motivated the study of estimators of asymptotic minimax – an estimator δ ′ {\displaystyle \delta '} is called c {\displaystyle c} -asymptotic (or approximate) minimax if For many estimation problems, especially in the non-parametric estimation setting, various approximate minimax estimators have been established. The design of the approximate minimax estimator is intimately related to the geometry, such as the metric entropy number , of Θ {\displaystyle \Theta } . Sometimes, a minimax estimator may take the form of a randomised decision rule . An example is shown on the left. The parameter space has just two elements and each point on the graph corresponds to the risk of a decision rule: the x-coordinate is the risk when the parameter is θ 1 {\displaystyle \theta _{1}} and the y-coordinate is the risk when the parameter is θ 2 {\displaystyle \theta _{2}} . In this decision problem, the minimax estimator lies on a line segment connecting two deterministic estimators. Choosing δ 1 {\displaystyle \delta _{1}} with probability 1 − p {\displaystyle 1-p} and δ 2 {\displaystyle \delta _{2}} with probability p {\displaystyle p} minimises the supremum risk. Robust optimization is an approach to solve optimization problems under uncertainty in the knowledge of underlying parameters. [ 4 ] [ 5 ] For instance, the MMSE Bayesian estimation of a parameter requires the knowledge of the parameter correlation function. If the knowledge of this correlation function is not perfectly available, a popular minimax robust optimization approach [ 6 ] is to define a set characterizing the uncertainty about the correlation function, and then to pursue a minimax optimization over the uncertainty set and the estimator respectively. Similar minimax optimizations can be pursued to make estimators robust to certain imprecisely known parameters. For instance, a recent study dealing with such techniques in the area of signal processing can be found in [ 7 ] . In R. Fandom Noubiap and W. Seidel (2001), an algorithm for calculating a Gamma-minimax decision rule has been developed for the case when Gamma is given by a finite number of generalized moment conditions. Such a decision rule minimizes the maximum of the integrals of the risk function with respect to all distributions in Gamma. Gamma-minimax decision rules are of interest in robustness studies in Bayesian statistics .
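For the randomised rule just described, the optimal mixing probability can be found by equalising the two coordinate risks; a sketch with hypothetical risk values:

```python
import numpy as np

# Hypothetical risk points of two deterministic rules delta_1, delta_2
# at the two parameter values (theta_1, theta_2).
r1 = np.array([1.0, 4.0])
r2 = np.array([5.0, 2.0])

# Risk of the randomised rule choosing delta_2 with probability p is the
# convex combination of the two risk points; minimise its worst coordinate.
ps = np.linspace(0.0, 1.0, 1001)
mix = np.outer(1 - ps, r1) + np.outer(ps, r2)
worst = mix.max(axis=1)
p_star = ps[worst.argmin()]
print(p_star, worst.min())  # p = 0.5 equalises both risks at 3.0
```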
https://en.wikipedia.org/wiki/Minimax_estimator
The minimum-cost flow problem ( MCFP ) is an optimization and decision problem to find the cheapest possible way of sending a certain amount of flow through a flow network . A typical application of this problem involves finding the best delivery route from a factory to a warehouse where the road network has some capacity and cost associated. The minimum cost flow problem is one of the most fundamental among all flow and circulation problems because most other such problems can be cast as a minimum cost flow problem and because it can be solved efficiently using the network simplex algorithm . A flow network is a directed graph G = ( V , E ) {\displaystyle G=(V,E)} with a source vertex s ∈ V {\displaystyle s\in V} and a sink vertex t ∈ V {\displaystyle t\in V} , where each edge ( u , v ) ∈ E {\displaystyle (u,v)\in E} has capacity c ( u , v ) > 0 {\displaystyle c(u,v)>0} , flow f ( u , v ) {\displaystyle f(u,v)} and cost a ( u , v ) {\displaystyle a(u,v)} , with most minimum-cost flow algorithms supporting edges with negative costs. The cost of sending this flow along an edge ( u , v ) {\displaystyle (u,v)} is f ( u , v ) ⋅ a ( u , v ) {\displaystyle f(u,v)\cdot a(u,v)} . The problem requires an amount of flow d {\displaystyle d} to be sent from source s {\displaystyle s} to sink t {\displaystyle t} . The definition of the problem is to minimize the total cost of the flow over all edges, ∑ ( u , v ) ∈ E a ( u , v ) ⋅ f ( u , v ) {\displaystyle \sum _{(u,v)\in E}a(u,v)\cdot f(u,v)} , with the constraints: capacity constraints f ( u , v ) ≤ c ( u , v ) {\displaystyle f(u,v)\leq c(u,v)} ; skew symmetry f ( u , v ) = − f ( v , u ) {\displaystyle f(u,v)=-f(v,u)} ; flow conservation ∑ w ∈ V f ( u , w ) = 0 {\displaystyle \sum _{w\in V}f(u,w)=0} for all u ≠ s , t {\displaystyle u\neq s,t} ; and required flow ∑ w ∈ V f ( s , w ) = d {\displaystyle \sum _{w\in V}f(s,w)=d} and ∑ w ∈ V f ( w , t ) = d {\displaystyle \sum _{w\in V}f(w,t)=d} . A variation of this problem is to find a flow which is maximum, but has the lowest cost among the maximum flow solutions. This could be called a minimum-cost maximum-flow problem and is useful for finding minimum cost maximum matchings . With some solutions, finding the minimum cost maximum flow instead is straightforward. If not, one can find the maximum flow by performing a binary search on d {\displaystyle d} . A related problem is the minimum cost circulation problem , which can be used for solving minimum cost flow. The minimum cost circulation problem has no source and sink; instead it has costs and lower and upper bounds on each edge, and seeks flow amounts within the given bounds that balance the flow at each vertex and minimize the sum over edges of cost times flow. Any minimum-cost flow instance can be converted into a minimum cost circulation instance by setting the lower bound on all edges to zero, and then making an extra edge from the sink t {\displaystyle t} to the source s {\displaystyle s} , with capacity c ( t , s ) = d {\displaystyle c(t,s)=d} and lower bound l ( t , s ) = d {\displaystyle l(t,s)=d} , forcing the total flow from s {\displaystyle s} to t {\displaystyle t} to also be d {\displaystyle d} . Several well-known problems are special cases of the minimum cost flow problem, among them the shortest path problem and the assignment problem described below. [ 1 ] The minimum cost flow problem can be solved by linear programming , since we optimize a linear function, and all constraints are linear. Apart from that, many combinatorial algorithms exist. [ 1 ] Some of them are generalizations of maximum flow algorithms , others use entirely different approaches. Well-known fundamental algorithms, each with many variations, include cycle canceling, successive shortest paths, capacity scaling, and the network simplex algorithm. Given a bipartite graph G = ( A ∪ B , E ) , the goal is to find the maximum cardinality matching in G that has minimum cost. Let w : E → R be a weight function on the edges of E . The minimum weight bipartite matching problem or assignment problem is to find a perfect matching M ⊆ E whose total weight is minimized.
The idea is to reduce this problem to a network flow problem. Let G′ = ( V′ = A ∪ B , E′ = E ) . Set the capacity of every edge in E′ to 1. Add a source vertex s and connect it to all the vertices in A , and add a sink vertex t and connect all vertices in B to this vertex. The capacity of all the new edges is 1 and their cost is 0. It can be shown that there is a minimum weight perfect bipartite matching in G if and only if there is a minimum cost flow in G′ . [ 1 ]
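One concrete way to see both the problem and this reduction at work is the successive-shortest-path scheme sketched below. This is a minimal illustration with invented vertex numbering and weights, not a production implementation (the network simplex and other algorithms mentioned above are preferred in practice). Bellman–Ford is used for the path search because residual arcs carry negated, hence possibly negative, costs.

```python
INF = float("inf")

def min_cost_flow(n, edges, s, t, d):
    """Send d units from s to t; edges are (u, v, capacity, cost) tuples."""
    cap = [[0] * n for _ in range(n)]
    cost = [[0] * n for _ in range(n)]
    for u, v, c, a in edges:
        cap[u][v] += c
        cost[u][v] = a
        cost[v][u] = -a                  # residual arc carries negated cost
    total = 0
    while d > 0:
        # Bellman-Ford shortest path in the residual graph.
        dist, prev = [INF] * n, [-1] * n
        dist[s] = 0
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == INF:
                    continue
                for v in range(n):
                    if cap[u][v] > 0 and dist[u] + cost[u][v] < dist[v]:
                        dist[v], prev[v] = dist[u] + cost[u][v], u
        if dist[t] == INF:
            raise ValueError("cannot route the requested amount of flow")
        # Bottleneck residual capacity along the path, then augment.
        f, v = d, t
        while v != s:
            f = min(f, cap[prev[v]][v])
            v = prev[v]
        v = t
        while v != s:
            u = prev[v]
            cap[u][v] -= f
            cap[v][u] += f
            v = u
        total += f * dist[t]
        d -= f
    return total

# Assignment reduction: A = {0, 1}, B = {2, 3}, source 4, sink 5; all
# capacities 1, zero cost on the new source and sink edges.
w = {(0, 2): 4, (0, 3): 1, (1, 2): 2, (1, 3): 3}
edges = [(u, v, 1, wt) for (u, v), wt in w.items()]
edges += [(4, a, 1, 0) for a in (0, 1)]
edges += [(b, 5, 1, 0) for b in (2, 3)]
print(min_cost_flow(6, edges, 4, 5, 2))  # 3: the matching {0-3, 1-2}
```

Each augmentation here carries one unit, so the loop runs twice; the returned cost 3 corresponds to the minimum-weight perfect matching {0–3, 1–2}.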
https://en.wikipedia.org/wiki/Minimum-cost_flow_problem
In computational geometry and computer science , the minimum-weight triangulation problem is the problem of finding a triangulation of minimal total edge length. [ 1 ] That is, an input polygon or the convex hull of an input point set must be subdivided into triangles that meet edge-to-edge and vertex-to-vertex, in such a way as to minimize the sum of the perimeters of the triangles. The problem is NP-hard for point set inputs, but may be approximated to any desired degree of accuracy. For polygon inputs, it may be solved exactly in polynomial time. The minimum weight triangulation has also sometimes been called the optimal triangulation . The problem of minimum weight triangulation of a point set was posed by Düppe & Gottschalk (1970) , who suggested its application to the construction of triangulated irregular network models of land contours, and used a greedy heuristic to approximate it. Shamos & Hoey (1975) conjectured that the minimum weight triangulation always coincided with the Delaunay triangulation , but this was quickly disproved by Lloyd (1977) , and indeed Kirkpatrick (1980) showed that the weights of the two triangulations can differ by a linear factor. [ 2 ] The minimum-weight triangulation problem became notorious when Garey & Johnson (1979) included it in a list of open problems in their book on NP-completeness , and many subsequent authors published partial results on it. Finally, Mulzer & Rote (2008) showed it to be NP-hard, and Remy & Steger (2009) showed that accurate approximations to it can be constructed efficiently. The weight of a triangulation of a set of points in the Euclidean plane is defined as the sum of lengths of its edges. Its decision variant is the problem of deciding whether there exists a triangulation of weight less than a given weight; it was proven to be NP-hard by Mulzer & Rote (2008) . Their proof is by reduction from PLANAR-1-IN-3-SAT, a special case of the Boolean satisfiability problem in which a 3-CNF formula whose associated graph is planar is accepted when it has a truth assignment that satisfies exactly one literal in each clause . The proof uses complex gadgets , and involves computer assistance to verify the correct behavior of these gadgets. It is not known whether the minimum-weight triangulation decision problem is NP-complete , since this depends on the open problem of whether the sum of radicals can be computed in polynomial time. However, Mulzer and Rote remark that the problem is NP-complete if the edge weights are rounded to integer values. Although NP-hard, the minimum weight triangulation may be constructed in subexponential time by a dynamic programming algorithm that considers all possible simple cycle separators of O ( √ n ) {\displaystyle O({\sqrt {n}})} points within the triangulation, recursively finds the optimal triangulation on each side of the cycle, and chooses the cycle separator leading to the smallest total weight. The total time for this method is 2 O ( n log ⁡ n ) {\displaystyle 2^{O({\sqrt {n}}\log n)}} . [ 3 ] Several authors have proven results relating the minimum weight triangulation to other triangulations in terms of the approximation ratio , the worst-case ratio of the total edge length of the alternative triangulation to the total length of the minimum weight triangulation.
In this vein, it is known that the Delaunay triangulation has an approximation ratio of Θ ( n ) {\displaystyle \Theta (n)} , [ 4 ] and that the greedy triangulation (the triangulation formed by adding edges in order from shortest to longest, at each step including an edge whenever it does not cross an earlier edge) has an approximation ratio of Θ ( √ n ) {\displaystyle \Theta ({\sqrt {n}})} . [ 5 ] Nevertheless, for randomly distributed point sets, both the Delaunay and greedy triangulations are within a logarithmic factor of the minimum weight. [ 6 ] The hardness result of Mulzer and Rote also implies the NP-hardness of finding an approximate solution with relative approximation error at most O(1/n 2 ). Thus, a fully polynomial approximation scheme for minimum weight triangulation is unlikely. However, a quasi-polynomial approximation scheme is possible: for any constant ε > 0, a solution with approximation ratio 1 + ε can be found in quasi-polynomial time exp(O((log n ) 9 )). [ 7 ] Because of the difficulty of finding the exact solutions of the minimum-weight triangulation, many authors have studied heuristics that may in some cases find the solution although they cannot be proven to work in all cases. In particular, much of this research has focused on the problem of finding sets of edges that are guaranteed to belong to the minimum-weight triangulation. If a subgraph of the minimum-weight triangulation found in this way turns out to be connected, then the remaining untriangulated space may be viewed as forming a simple polygon, and the entire triangulation may be found by using dynamic programming to find the optimal triangulation of this polygonal space. The same dynamic programming approach can also be extended to the case that the subgraph has a bounded number of connected components, [ 8 ] and the same approach of finding a connected graph and then applying dynamic programming to fill in the polygonal gaps surrounding the graph edges has also been used as part of heuristics for finding low-weight but not minimum-weight triangulations. [ 9 ] The graph formed by connecting two points whenever they are each other's nearest neighbors is necessarily a subgraph of the minimum-weight triangulation. [ 10 ] However, this mutual nearest neighbor graph is a matching , and hence is never connected. A related line of research finds large subgraphs of the minimum-weight triangulation by using circle-based β -skeletons , the geometric graphs formed by including an edge between two points u and v whenever there does not exist a third point w forming an angle uwv greater than some parameter θ. It has been shown that, for sufficiently small values of θ, the graph formed in this way is a subgraph of the minimum-weight triangulation. [ 11 ] However, the choice of θ needed to ensure this is smaller than the angle for which the β -skeleton is always connected. A more sophisticated technique called the LMT-skeleton was proposed by Dickerson & Montague (1996) . It is formed by an iterative process, in which two sets of edges are maintained, a set of edges known to belong to the minimum-weight triangulation and a set of edges that are candidates to belong to it. Initially, the set of known edges is initialized to the convex hull of the input, and all remaining pairs of vertices form candidate edges.
Then, in each iteration of the construction process, candidate edges are removed whenever there is no pair of triangles formed by the remaining edges forming a quadrilateral for which the candidate edge is the shortest diagonal, and candidate edges are moved to the set of known edges when there is no other candidate edge that crosses them. The LMT-skeleton is defined to be the set of known edges produced after this process stops making any more changes. It is guaranteed to be a subgraph of the minimum-weight triangulation, can be constructed efficiently, and in experiments on sets of up to 200 points it was frequently connected. [ 12 ] However, it has been shown that on average, for large point sets, it has a linear number of connected components. [ 13 ] Other heuristics that have been applied to the minimum weight triangulation problem include genetic algorithms , [ 14 ] branch and bound , [ 15 ] and ant colony optimization algorithms . [ 16 ] A polygon triangulation of minimal weight may be constructed in cubic time using the dynamic programming approach, reported independently by Gilbert (1979) and Klincsek (1980) . In this method, the vertices are numbered consecutively around the boundary of the polygon, and for each diagonal from vertex i to vertex j that lies within the polygon, the optimal triangulation is calculated by considering all possible triangles ijk within the polygon, adding the weights of the optimal triangulations below the diagonals ik and jk , and choosing the vertex k that leads to the minimum total weight. That is, if MWT( i , j ) denotes the weight of the minimum-weight triangulation of the polygon below edge ij , then the overall algorithm performs the following steps: it sets MWT( i , i + 1) to the length of the boundary edge from vertex i to vertex i + 1, and then, in order of increasing diagonal span j − i , sets MWT( i , j ) to the length of edge ij plus the minimum of MWT( i , k ) + MWT( k , j ) over all intermediate vertices k (a runnable sketch of this dynamic program appears after this passage). After this iteration completes, MWT(1, n ) will store the total weight of the minimum weight triangulation. The triangulation itself may be obtained by recursively searching through this array, starting from MWT(1, n ), at each step choosing the triangle ijk that leads to the minimum value for MWT( i , j ) and recursively searching MWT( i , k ) and MWT( j , k ). Similar dynamic programming methods may also be adapted to point set inputs where all but a constant number of points lie on the convex hull of the input, [ 17 ] and to point sets that lie on a constant number of nested convex polygons or on a constant number of lines no two of which cross within the convex hull. [ 18 ] It is also possible to formulate a version of the point set or polygon triangulation problems in which one is allowed to add Steiner points , extra vertices, in order to reduce the total edge length of the resulting triangulations. In some cases, the resulting triangulation may be shorter than the minimum weight triangulation by as much as a linear factor. It is possible to approximate the minimum weight Steiner triangulation of a point set to within a constant factor of optimal, but the constant factor in the approximation is large. [ 19 ] Related problems that have also been studied include the construction of minimum-weight pseudotriangulations [ 20 ] and the characterization of the graphs of minimum-weight triangulations. [ 21 ]
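Restricted to convex polygons for brevity (so that every diagonal lies inside the polygon and no containment test is needed), the dynamic program above can be sketched as follows; the function and variable names are ours, and the convention used counts every boundary edge and every diagonal exactly once.

```python
from math import dist

def mwt_polygon(pts):
    """Total edge weight of a minimum-weight convex-polygon triangulation.

    pts lists the vertices in boundary order; MWT[i][j] is the weight of
    the optimal triangulation of the sub-polygon spanned by vertices
    i..j, counting the edge (i, j) itself once.
    """
    n = len(pts)
    MWT = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        MWT[i][i + 1] = dist(pts[i], pts[i + 1])     # boundary edges
    for gap in range(2, n):                          # diagonals by span
        for i in range(n - gap):
            j = i + gap
            MWT[i][j] = dist(pts[i], pts[j]) + min(
                MWT[i][k] + MWT[k][j] for k in range(i + 1, j))
    return MWT[0][n - 1]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(mwt_polygon(square))   # 4 boundary edges plus one diagonal: 4 + sqrt(2)
```

Each of the O(n 2 ) table entries takes O(n) work to fill, giving the cubic running time quoted above.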
https://en.wikipedia.org/wiki/Minimum-weight_triangulation
In mathematical statistics , the Kullback–Leibler ( KL ) divergence (also called relative entropy and I-divergence [ 1 ] ), denoted D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} , is a type of statistical distance : a measure of how much a model probability distribution Q is different from a true probability distribution P . [ 2 ] [ 3 ] Mathematically, it is defined as D KL ( P ∥ Q ) = ∑ x ∈ X P ( x ) log ⁡ P ( x ) Q ( x ) . {\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}.} A simple interpretation of the KL divergence of P from Q is the expected excess surprise from using Q as a model instead of P when the actual distribution is P . While it is a measure of how different two distributions are and is thus a distance in some sense, it is not actually a metric , which is the most familiar and formal type of distance. In particular, it is not symmetric in the two distributions (in contrast to variation of information ), and does not satisfy the triangle inequality . Instead, in terms of information geometry , it is a type of divergence , [ 4 ] a generalization of squared distance , and for certain classes of distributions (notably an exponential family ), it satisfies a generalized Pythagorean theorem (which applies to squared distances). [ 5 ] Relative entropy is always a non-negative real number , with value 0 if and only if the two distributions in question are identical. It has diverse applications, both theoretical, such as characterizing the relative (Shannon) entropy in information systems, randomness in continuous time-series , and information gain when comparing statistical models of inference ; and practical, such as applied statistics, fluid mechanics , neuroscience , bioinformatics , and machine learning . Consider two probability distributions P and Q . Usually, P represents the data, the observations, or a measured probability distribution. Distribution Q represents instead a theory, a model, a description or an approximation of P . The Kullback–Leibler divergence D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} is then interpreted as the average difference of the number of bits required for encoding samples of P using a code optimized for Q rather than one optimized for P . Note that the roles of P and Q can be reversed in some situations where that is easier to compute, such as with the expectation–maximization algorithm (EM) and evidence lower bound (ELBO) computations. The relative entropy was introduced by Solomon Kullback and Richard Leibler in Kullback & Leibler (1951) as "the mean information for discrimination between H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} per observation from μ 1 {\displaystyle \mu _{1}} ", [ 6 ] where one is comparing two probability measures μ 1 , μ 2 {\displaystyle \mu _{1},\mu _{2}} , and H 1 , H 2 {\displaystyle H_{1},H_{2}} are the hypotheses that one is selecting from measure μ 1 , μ 2 {\displaystyle \mu _{1},\mu _{2}} (respectively). They denoted this by I ( 1 : 2 ) {\displaystyle I(1:2)} , and defined the "'divergence' between μ 1 {\displaystyle \mu _{1}} and μ 2 {\displaystyle \mu _{2}} " as the symmetrized quantity J ( 1 , 2 ) = I ( 1 : 2 ) + I ( 2 : 1 ) {\displaystyle J(1,2)=I(1:2)+I(2:1)} , which had already been defined and used by Harold Jeffreys in 1948. 
[ 7 ] In Kullback (1959) , the symmetrized form is again referred to as the "divergence", and the relative entropies in each direction are referred to as "directed divergences" between two distributions; [ 8 ] Kullback preferred the term discrimination information . [ 9 ] The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality. [ 10 ] Numerous references to earlier uses of the symmetrized divergence and to other statistical distances are given in Kullback (1959 , pp. 6–7, §1.3 Divergence). The asymmetric "directed divergence" has come to be known as the Kullback–Leibler divergence, while the symmetrized "divergence" is now referred to as the Jeffreys divergence . For discrete probability distributions P and Q defined on the same sample space , X {\displaystyle {\mathcal {X}}} , the relative entropy from Q to P is defined [ 11 ] to be D KL ( P ∥ Q ) = ∑ x ∈ X P ( x ) log ⁡ P ( x ) Q ( x ) , {\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}\,,} which is equivalent to D KL ( P ∥ Q ) = − ∑ x ∈ X P ( x ) log ⁡ Q ( x ) P ( x ) . {\displaystyle D_{\text{KL}}(P\parallel Q)=-\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {Q(x)}{P(x)}}\,.} In other words, it is the expectation of the logarithmic difference between the probabilities P and Q , where the expectation is taken using the probabilities P . Relative entropy is only defined in this way if, for all x , Q ( x ) = 0 {\displaystyle Q(x)=0} implies P ( x ) = 0 {\displaystyle P(x)=0} ( absolute continuity ). Otherwise, it is often defined as + ∞ {\displaystyle +\infty } , [ 1 ] but the value + ∞ {\displaystyle \ +\infty \ } is possible even if Q ( x ) ≠ 0 {\displaystyle Q(x)\neq 0} everywhere, [ 12 ] [ 13 ] provided that X {\displaystyle {\mathcal {X}}} is infinite in extent. Analogous comments apply to the continuous and general measure cases defined below. Whenever P ( x ) {\displaystyle P(x)} is zero the contribution of the corresponding term is interpreted as zero because lim x → 0 + x log ⁡ ( x ) = 0 . {\displaystyle \lim _{x\to 0^{+}}x\,\log(x)=0\,.} For distributions P and Q of a continuous random variable , relative entropy is defined to be the integral [ 14 ] D KL ( P ∥ Q ) = ∫ − ∞ ∞ p ( x ) log ⁡ p ( x ) q ( x ) d x , {\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{-\infty }^{\infty }p(x)\,\log {\frac {p(x)}{q(x)}}\,dx\,,} where p and q denote the probability densities of P and Q . More generally, if P and Q are probability measures on a measurable space X , {\displaystyle {\mathcal {X}}\,,} and P is absolutely continuous with respect to Q , then the relative entropy from Q to P is defined as D KL ( P ∥ Q ) = ∫ x ∈ X log ⁡ P ( d x ) Q ( d x ) P ( d x ) , {\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}\log {\frac {P(dx)}{Q(dx)}}\,P(dx)\,,} where P ( d x ) Q ( d x ) {\displaystyle {\frac {P(dx)}{Q(dx)}}} is the Radon–Nikodym derivative of P with respect to Q , i.e. the unique Q almost everywhere defined function r on X {\displaystyle {\mathcal {X}}} such that P ( d x ) = r ( x ) Q ( d x ) {\displaystyle P(dx)=r(x)Q(dx)} which exists because P is absolutely continuous with respect to Q . Also we assume the expression on the right-hand side exists.
Equivalently (by the chain rule ), this can be written as D KL ( P ∥ Q ) = ∫ x ∈ X P ( d x ) Q ( d x ) log ⁡ P ( d x ) Q ( d x ) Q ( d x ) , {\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}{\frac {P(dx)}{Q(dx)}}\ \log {\frac {P(dx)}{Q(dx)}}\ Q(dx)\,,} which is the entropy of P relative to Q . Continuing in this case, if μ {\displaystyle \mu } is any measure on X {\displaystyle {\mathcal {X}}} for which densities p and q with P ( d x ) = p ( x ) μ ( d x ) {\displaystyle P(dx)=p(x)\mu (dx)} and Q ( d x ) = q ( x ) μ ( d x ) {\displaystyle Q(dx)=q(x)\mu (dx)} exist (meaning that P and Q are both absolutely continuous with respect to μ {\displaystyle \mu } ), then the relative entropy from Q to P is given as D KL ( P ∥ Q ) = ∫ x ∈ X p ( x ) log ⁡ p ( x ) q ( x ) μ ( d x ) . {\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}p(x)\,\log {\frac {p(x)}{q(x)}}\ \mu (dx)\,.} Note that such a measure μ {\displaystyle \mu } for which densities can be defined always exists, since one can take μ = 1 2 ( P + Q ) {\textstyle \mu ={\frac {1}{2}}\left(P+Q\right)} , although in practice it will usually be one that is natural in the context, like the counting measure for discrete distributions, or the Lebesgue measure (or a convenient variant thereof, such as Gaussian measure, the uniform measure on the sphere , or Haar measure on a Lie group ) for continuous distributions. The logarithms in these formulae are usually taken to base 2 if information is measured in units of bits , or to base e if information is measured in nats . Most formulas involving relative entropy hold regardless of the base of the logarithm. Various conventions exist for referring to D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} in words. Often it is referred to as the divergence between P and Q , but this fails to convey the fundamental asymmetry in the relation. Sometimes, as in this article, it may be described as the divergence of P from Q or as the divergence from Q to P . This reflects the asymmetry in Bayesian inference , which starts from a prior Q and updates to the posterior P . Another common way to refer to D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} is as the relative entropy of P with respect to Q or the information gain from P over Q . Kullback [ 3 ] gives the following example (Table 2.1, Example 2.1). Let P and Q be two distributions: P a binomial distribution with N = 2 {\displaystyle N=2} and p = 0.4 {\displaystyle p=0.4} , and Q a discrete uniform distribution with the three possible outcomes x = 0 , 1 , 2 (i.e. X = { 0 , 1 , 2 } {\displaystyle {\mathcal {X}}=\{0,1,2\}} ), each with probability p = 1 / 3 {\displaystyle p=1/3} . Relative entropies D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} and D KL ( Q ∥ P ) {\displaystyle D_{\text{KL}}(Q\parallel P)} are calculated as follows.
This example uses the natural log with base e , denoted ln , to get results in nats (see units of information ): D KL ( P ∥ Q ) = ∑ x ∈ X P ( x ) ln ⁡ P ( x ) Q ( x ) = 9 25 ln ⁡ 9 / 25 1 / 3 + 12 25 ln ⁡ 12 / 25 1 / 3 + 4 25 ln ⁡ 4 / 25 1 / 3 = 1 25 ( 32 ln ⁡ 2 + 55 ln ⁡ 3 − 50 ln ⁡ 5 ) ≈ 0.0852996 , {\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}P(x)\,\ln {\frac {P(x)}{Q(x)}}\\&={\frac {9}{25}}\ln {\frac {9/25}{1/3}}+{\frac {12}{25}}\ln {\frac {12/25}{1/3}}+{\frac {4}{25}}\ln {\frac {4/25}{1/3}}\\&={\frac {1}{25}}\left(32\ln 2+55\ln 3-50\ln 5\right)\\&\approx 0.0852996,\end{aligned}}} D KL ( Q ∥ P ) = ∑ x ∈ X Q ( x ) ln ⁡ Q ( x ) P ( x ) = 1 3 ln ⁡ 1 / 3 9 / 25 + 1 3 ln ⁡ 1 / 3 12 / 25 + 1 3 ln ⁡ 1 / 3 4 / 25 = 1 3 ( − 4 ln ⁡ 2 − 6 ln ⁡ 3 + 6 ln ⁡ 5 ) ≈ 0.097455. {\displaystyle {\begin{aligned}D_{\text{KL}}(Q\parallel P)&=\sum _{x\in {\mathcal {X}}}Q(x)\,\ln {\frac {Q(x)}{P(x)}}\\&={\frac {1}{3}}\,\ln {\frac {1/3}{9/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{12/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{4/25}}\\&={\frac {1}{3}}\left(-4\ln 2-6\ln 3+6\ln 5\right)\\&\approx 0.097455.\end{aligned}}} In the field of statistics, the Neyman–Pearson lemma states that the most powerful way to distinguish between the two distributions P and Q based on an observation Y (drawn from one of them) is through the log of the ratio of their likelihoods: log ⁡ P ( Y ) − log ⁡ Q ( Y ) {\displaystyle \log P(Y)-\log Q(Y)} . The KL divergence is the expected value of this statistic if Y is actually drawn from P . Kullback motivated the statistic as an expected log likelihood ratio. [ 15 ] In the context of coding theory , D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} can be constructed by measuring the expected number of extra bits required to code samples from P using a code optimized for Q rather than the code optimized for P . In the context of machine learning , D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} is often called the information gain achieved if P were used instead of Q , which is currently used. By analogy with information theory, it is called the relative entropy of P with respect to Q . Expressed in the language of Bayesian inference , D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} is a measure of the information gained by revising one's beliefs from the prior probability distribution Q to the posterior probability distribution P . In other words, it is the amount of information lost when Q is used to approximate P . [ 16 ] In applications, P typically represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution, while Q typically represents a theory, model, description, or approximation of P . In order to find a distribution Q that is closest to P , we can minimize the KL divergence and compute an information projection . While it is a statistical distance , it is not a metric , the most familiar type of distance, but instead it is a divergence . [ 4 ] While metrics are symmetric and generalize linear distance, satisfying the triangle inequality , divergences are asymmetric and generalize squared distance, in some cases satisfying a generalized Pythagorean theorem . In general D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} does not equal D KL ( Q ∥ P ) {\displaystyle D_{\text{KL}}(Q\parallel P)} , and the asymmetry is an important part of the geometry. [ 4 ]
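These two values are easy to reproduce numerically. The sketch below (the helper name kl is ours) also checks, in passing, the identity D KL ( P ∥ Q ) = H ( P , Q ) − H ( P ) that appears in the next passage.

```python
import numpy as np

P = np.array([9, 12, 4]) / 25      # binomial N = 2, p = 0.4
Q = np.array([1, 1, 1]) / 3        # uniform on {0, 1, 2}

def kl(p, q):
    # Terms with p(x) = 0 would contribute 0 by the x log x -> 0 convention.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

print(kl(P, Q))                    # ~0.0852996 nats
print(kl(Q, P))                    # ~0.097455 nats: the divergence is asymmetric

cross_entropy = -float(np.sum(P * np.log(Q)))   # H(P, Q)
entropy = -float(np.sum(P * np.log(P)))         # H(P)
assert abs(kl(P, Q) - (cross_entropy - entropy)) < 1e-12
```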
The infinitesimal form of relative entropy, specifically its Hessian , gives a metric tensor that equals the Fisher information metric ; see § Fisher information metric . The Fisher information metric on a statistical manifold determines the natural gradient used in information-geometric optimization algorithms. [ 17 ] Its quantum version is the Fubini–Study metric. [ 18 ] Relative entropy satisfies a generalized Pythagorean theorem for exponential families (geometrically interpreted as dually flat manifolds ), and this allows one to minimize relative entropy by geometric means, for example by information projection and in maximum likelihood estimation . [ 5 ] The relative entropy is the Bregman divergence generated by the negative entropy, but it is also of the form of an f -divergence . For probabilities over a finite alphabet , it is unique in being a member of both of these classes of statistical divergences . One application of Bregman divergence is the mirror descent algorithm. [ 19 ] Consider a growth-optimizing investor in a fair game with mutually exclusive outcomes (e.g. a “horse race” in which the official odds add up to one). The rate of return expected by such an investor is equal to the relative entropy between the investor's believed probabilities and the official odds. [ 20 ] This is a special case of a much more general connection between financial returns and divergence measures. [ 21 ] Financial risks are connected to D KL {\displaystyle D_{\text{KL}}} via information geometry. [ 22 ] Investors' views, the prevailing market view, and risky scenarios form triangles on the relevant manifold of probability distributions. The shape of the triangles determines key financial risks (both qualitatively and quantitatively). For instance, obtuse triangles in which investors' views and risk scenarios appear on “opposite sides” relative to the market describe negative risks, acute triangles describe positive exposure, and the right-angled situation in the middle corresponds to zero risk. Extending this concept, relative entropy can be hypothetically utilised to identify the behaviour of informed investors, if one takes this to be represented by the magnitude and deviations away from the prior expectations of fund flows, for example. [ 23 ] In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value x i {\displaystyle x_{i}} out of a set of possibilities X can be seen as representing an implicit probability distribution q ( x i ) = 2 − ℓ i {\displaystyle q(x_{i})=2^{-\ell _{i}}} over X , where ℓ i {\displaystyle \ell _{i}} is the length of the code for x i {\displaystyle x_{i}} in bits. Therefore, relative entropy can be interpreted as the expected extra message-length per datum that must be communicated if a code that is optimal for a given (wrong) distribution Q is used, compared to using a code based on the true distribution P : it is the excess entropy.
D KL ( P ∥ Q ) = ∑ x ∈ X p ( x ) log ⁡ 1 q ( x ) − ∑ x ∈ X p ( x ) log ⁡ 1 p ( x ) = H ( P , Q ) − H ( P ) {\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{q(x)}}-\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{p(x)}}\\[5pt]&=\mathrm {H} (P,Q)-\mathrm {H} (P)\end{aligned}}} where H ( P , Q ) {\displaystyle \mathrm {H} (P,Q)} is the cross entropy of Q relative to P and H ( P ) {\displaystyle \mathrm {H} (P)} is the entropy of P (which is the same as the cross-entropy of P with itself). The relative entropy D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} can be thought of geometrically as a statistical distance , a measure of how far the distribution Q is from the distribution P . Geometrically it is a divergence : an asymmetric, generalized form of squared distance. The cross-entropy H ( P , Q ) {\displaystyle H(P,Q)} is itself such a measurement (formally a loss function ), but it cannot be thought of as a distance, since H ( P , P ) =: H ( P ) {\displaystyle H(P,P)=:H(P)} is not zero. This can be fixed by subtracting H ( P ) {\displaystyle H(P)} to make D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} agree more closely with our notion of distance, as the excess loss. The resulting function is asymmetric, and while this can be symmetrized (see § Symmetrised divergence ), the asymmetric form is more useful. See § Interpretations for more on the geometric interpretation. Relative entropy relates to " rate function " in the theory of large deviations . [ 24 ] [ 25 ] Arthur Hobson proved that relative entropy is the only measure of difference between probability distributions that satisfies some desired properties, which are the canonical extension to those appearing in a commonly used characterization of entropy . [ 26 ] Consequently, mutual information is the only measure of mutual dependence that obeys certain related conditions, since it can be defined in terms of Kullback–Leibler divergence . Relative entropy vanishes precisely when the two distributions coincide: in particular, if D KL ( P ∥ Q ) = 0 {\displaystyle D_{\text{KL}}(P\parallel Q)=0} with P ( d x ) = p ( x ) μ ( d x ) {\displaystyle P(dx)=p(x)\mu (dx)} and Q ( d x ) = q ( x ) μ ( d x ) {\displaystyle Q(dx)=q(x)\mu (dx)} , then p ( x ) = q ( x ) {\displaystyle p(x)=q(x)} μ {\displaystyle \mu } - almost everywhere . The entropy H ( P ) {\displaystyle \mathrm {H} (P)} thus sets a minimum value for the cross-entropy H ( P , Q ) {\displaystyle \mathrm {H} (P,Q)} , the expected number of bits required when using a code based on Q rather than P ; and the Kullback–Leibler divergence therefore represents the expected number of extra bits that must be transmitted to identify a value x drawn from X , if a code is used corresponding to the probability distribution Q , rather than the "true" distribution P . Denote f ( α ) := D KL ( ( 1 − α ) Q + α P ∥ Q ) {\displaystyle f(\alpha ):=D_{\text{KL}}((1-\alpha )Q+\alpha P\parallel Q)} and note that D KL ( P ∥ Q ) = f ( 1 ) {\displaystyle D_{\text{KL}}(P\parallel Q)=f(1)} .
The first derivative of f {\displaystyle f} may be derived and evaluated as follows f ′ ( α ) = ∑ x ∈ X ( P ( x ) − Q ( x ) ) ( log ⁡ ( ( 1 − α ) Q ( x ) + α P ( x ) Q ( x ) ) + 1 ) = ∑ x ∈ X ( P ( x ) − Q ( x ) ) log ⁡ ( ( 1 − α ) Q ( x ) + α P ( x ) Q ( x ) ) f ′ ( 0 ) = 0 {\displaystyle {\begin{aligned}f'(\alpha )&=\sum _{x\in {\mathcal {X}}}(P(x)-Q(x))\left(\log \left({\frac {(1-\alpha )Q(x)+\alpha P(x)}{Q(x)}}\right)+1\right)\\&=\sum _{x\in {\mathcal {X}}}(P(x)-Q(x))\log \left({\frac {(1-\alpha )Q(x)+\alpha P(x)}{Q(x)}}\right)\\f'(0)&=0\end{aligned}}} Further derivatives may be derived and evaluated as follows f ″ ( α ) = ∑ x ∈ X ( P ( x ) − Q ( x ) ) 2 ( 1 − α ) Q ( x ) + α P ( x ) f ″ ( 0 ) = ∑ x ∈ X ( P ( x ) − Q ( x ) ) 2 Q ( x ) f ( n ) ( α ) = ( − 1 ) n ( n − 2 ) ! ∑ x ∈ X ( P ( x ) − Q ( x ) ) n ( ( 1 − α ) Q ( x ) + α P ( x ) ) n − 1 f ( n ) ( 0 ) = ( − 1 ) n ( n − 2 ) ! ∑ x ∈ X ( P ( x ) − Q ( x ) ) n Q ( x ) n − 1 {\displaystyle {\begin{aligned}f''(\alpha )&=\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{2}}{(1-\alpha )Q(x)+\alpha P(x)}}\\f''(0)&=\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{2}}{Q(x)}}\\f^{(n)}(\alpha )&=(-1)^{n}(n-2)!\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{n}}{\left((1-\alpha )Q(x)+\alpha P(x)\right)^{n-1}}}\\f^{(n)}(0)&=(-1)^{n}(n-2)!\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{n}}{Q(x)^{n-1}}}\end{aligned}}} Hence solving for D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} via the Taylor expansion of f {\displaystyle f} about 0 {\displaystyle 0} evaluated at α = 1 {\displaystyle \alpha =1} yields D KL ( P ∥ Q ) = ∑ n = 0 ∞ f ( n ) ( 0 ) n ! = ∑ n = 2 ∞ 1 n ( n − 1 ) ∑ x ∈ X ( Q ( x ) − P ( x ) ) n Q ( x ) n − 1 {\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{n=0}^{\infty }{\frac {f^{(n)}(0)}{n!}}\\&=\sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}{\frac {(Q(x)-P(x))^{n}}{Q(x)^{n-1}}}\end{aligned}}} P ≤ 2 Q {\displaystyle P\leq 2Q} a.s. is a sufficient condition for convergence of the series by the following absolute convergence argument ∑ n = 2 ∞ | 1 n ( n − 1 ) ∑ x ∈ X ( Q ( x ) − P ( x ) ) n Q ( x ) n − 1 | = ∑ n = 2 ∞ 1 n ( n − 1 ) ∑ x ∈ X | Q ( x ) − P ( x ) | | 1 − P ( x ) Q ( x ) | n − 1 ≤ ∑ n = 2 ∞ 1 n ( n − 1 ) ∑ x ∈ X | Q ( x ) − P ( x ) | ≤ ∑ n = 2 ∞ 1 n ( n − 1 ) = 1 {\displaystyle {\begin{aligned}\sum _{n=2}^{\infty }\left\vert {\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}{\frac {(Q(x)-P(x))^{n}}{Q(x)^{n-1}}}\right\vert &=\sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}\left\vert Q(x)-P(x)\right\vert \left\vert 1-{\frac {P(x)}{Q(x)}}\right\vert ^{n-1}\\&\leq \sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}\left\vert Q(x)-P(x)\right\vert \\&\leq \sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\\&=1\end{aligned}}} P ≤ 2 Q {\displaystyle P\leq 2Q} a.s. is also a necessary condition for convergence of the series by the following proof by contradiction. Assume that P > 2 Q {\displaystyle P>2Q} with measure strictly greater than 0 {\displaystyle 0} . It then follows that there must exist some values ε > 0 {\displaystyle \varepsilon >0} , ρ > 0 {\displaystyle \rho >0} , and U < ∞ {\displaystyle U<\infty } such that P ≥ 2 Q + ε {\displaystyle P\geq 2Q+\varepsilon } and Q ≤ U {\displaystyle Q\leq U} with measure ρ {\displaystyle \rho } . 
The previous proof of sufficiency demonstrated that the measure 1 − ρ {\displaystyle 1-\rho } component of the series where P ≤ 2 Q {\displaystyle P\leq 2Q} is bounded, so we need only concern ourselves with the behavior of the measure ρ {\displaystyle \rho } component of the series where P ≥ 2 Q + ε {\displaystyle P\geq 2Q+\varepsilon } . The absolute value of the n {\displaystyle n} th term of this component of the series is then lower bounded by 1 n ( n − 1 ) ρ ( 1 + ε U ) n {\displaystyle {\frac {1}{n(n-1)}}\rho \left(1+{\frac {\varepsilon }{U}}\right)^{n}} , which is unbounded as n → ∞ {\displaystyle n\to \infty } , so the series diverges. The following result, due to Donsker and Varadhan, [ 29 ] is known as Donsker and Varadhan's variational formula . Theorem [Duality Formula for Variational Inference] — Let Θ {\displaystyle \Theta } be a set endowed with an appropriate σ {\displaystyle \sigma } -field F {\displaystyle {\mathcal {F}}} , and two probability measures P and Q , which formulate two probability spaces ( Θ , F , P ) {\displaystyle (\Theta ,{\mathcal {F}},P)} and ( Θ , F , Q ) {\displaystyle (\Theta ,{\mathcal {F}},Q)} , with Q ≪ P {\displaystyle Q\ll P} . ( Q ≪ P {\displaystyle Q\ll P} indicates that Q is absolutely continuous with respect to P .) Let h be a real-valued integrable random variable on ( Θ , F , P ) {\displaystyle (\Theta ,{\mathcal {F}},P)} . Then the following equality holds log ⁡ E P [ exp ⁡ h ] = sup Q ≪ P ⁡ { E Q [ h ] − D KL ( Q ∥ P ) } . {\displaystyle \log E_{P}[\exp h]=\operatorname {sup} _{Q\ll P}\{E_{Q}[h]-D_{\text{KL}}(Q\parallel P)\}.} Further, the supremum on the right-hand side is attained if and only if it holds Q ( d θ ) P ( d θ ) = exp ⁡ h ( θ ) E P [ exp ⁡ h ] , {\displaystyle {\frac {Q(d\theta )}{P(d\theta )}}={\frac {\exp h(\theta )}{E_{P}[\exp h]}},} almost surely with respect to probability measure P , where Q ( d θ ) P ( d θ ) {\displaystyle {\frac {Q(d\theta )}{P(d\theta )}}} denotes the Radon-Nikodym derivative of Q with respect to P . For a short proof assuming integrability of exp ⁡ ( h ) {\displaystyle \exp(h)} with respect to P , let Q ∗ {\displaystyle Q^{*}} have P -density exp ⁡ h ( θ ) E P [ exp ⁡ h ] {\displaystyle {\frac {\exp h(\theta )}{E_{P}[\exp h]}}} , i.e. Q ∗ ( d θ ) = exp ⁡ h ( θ ) E P [ exp ⁡ h ] P ( d θ ) {\displaystyle Q^{*}(d\theta )={\frac {\exp h(\theta )}{E_{P}[\exp h]}}P(d\theta )} Then D KL ( Q ∥ Q ∗ ) − D KL ( Q ∥ P ) = − E Q [ h ] + log ⁡ E P [ exp ⁡ h ] . {\displaystyle D_{\text{KL}}(Q\parallel Q^{*})-D_{\text{KL}}(Q\parallel P)=-E_{Q}[h]+\log E_{P}[\exp h].} Therefore, E Q [ h ] − D KL ( Q ∥ P ) = log ⁡ E P [ exp ⁡ h ] − D KL ( Q ∥ Q ∗ ) ≤ log ⁡ E P [ exp ⁡ h ] , {\displaystyle E_{Q}[h]-D_{\text{KL}}(Q\parallel P)=\log E_{P}[\exp h]-D_{\text{KL}}(Q\parallel Q^{*})\leq \log E_{P}[\exp h],} where the last inequality follows from D KL ( Q ∥ Q ∗ ) ≥ 0 {\displaystyle D_{\text{KL}}(Q\parallel Q^{*})\geq 0} , for which equality occurs if and only if Q = Q ∗ {\displaystyle Q=Q^{*}} . The conclusion follows. Suppose that we have two multivariate normal distributions , with means μ 0 , μ 1 {\displaystyle \mu _{0},\mu _{1}} and with (non-singular) covariance matrices Σ 0 , Σ 1 . {\displaystyle \Sigma _{0},\Sigma _{1}.} If the two distributions have the same dimension, k , then the relative entropy between the distributions is as follows: [ 30 ] D KL ( N 0 ∥ N 1 ) = 1 2 [ tr ⁡ ( Σ 1 − 1 Σ 0 ) − k + ( μ 1 − μ 0 ) T Σ 1 − 1 ( μ 1 − μ 0 ) + ln ⁡ det Σ 1 det Σ 0 ] . 
{\displaystyle D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left[\operatorname {tr} \left(\Sigma _{1}^{-1}\Sigma _{0}\right)-k+\left(\mu _{1}-\mu _{0}\right)^{\mathsf {T}}\Sigma _{1}^{-1}\left(\mu _{1}-\mu _{0}\right)+\ln {\frac {\det \Sigma _{1}}{\det \Sigma _{0}}}\right].} The logarithm in the last term must be taken to base e since all terms apart from the last are base- e logarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured in nats . Dividing the entire expression above by ln ⁡ ( 2 ) {\displaystyle \ln(2)} yields the divergence in bits . In a numerical implementation, it is helpful to express the result in terms of the Cholesky decompositions L 0 , L 1 {\displaystyle L_{0},L_{1}} such that Σ 0 = L 0 L 0 T {\displaystyle \Sigma _{0}=L_{0}L_{0}^{T}} and Σ 1 = L 1 L 1 T {\displaystyle \Sigma _{1}=L_{1}L_{1}^{T}} . Then with M and y solutions to the triangular linear systems L 1 M = L 0 {\displaystyle L_{1}M=L_{0}} , and L 1 y = μ 1 − μ 0 {\displaystyle L_{1}y=\mu _{1}-\mu _{0}} , D KL ( N 0 ∥ N 1 ) = 1 2 ( ∑ i , j = 1 k ( M i j ) 2 − k + | y | 2 + 2 ∑ i = 1 k ln ⁡ ( L 1 ) i i ( L 0 ) i i ) . {\displaystyle D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left(\sum _{i,j=1}^{k}{\left(M_{ij}\right)}^{2}-k+|y|^{2}+2\sum _{i=1}^{k}\ln {\frac {(L_{1})_{ii}}{(L_{0})_{ii}}}\right).} A special case, and a common quantity in variational inference , is the relative entropy between a diagonal multivariate normal, and a standard normal distribution (with zero mean and unit variance): D KL ( N ( ( μ 1 , … , μ k ) T , diag ⁡ ( σ 1 2 , … , σ k 2 ) ) ∥ N ( 0 , I ) ) = 1 2 ∑ i = 1 k [ σ i 2 + μ i 2 − 1 − ln ⁡ ( σ i 2 ) ] . {\displaystyle D_{\text{KL}}\left({\mathcal {N}}\left(\left(\mu _{1},\ldots ,\mu _{k}\right)^{\mathsf {T}},\operatorname {diag} \left(\sigma _{1}^{2},\ldots ,\sigma _{k}^{2}\right)\right)\parallel {\mathcal {N}}\left(\mathbf {0} ,\mathbf {I} \right)\right)={\frac {1}{2}}\sum _{i=1}^{k}\left[\sigma _{i}^{2}+\mu _{i}^{2}-1-\ln \left(\sigma _{i}^{2}\right)\right].} For two univariate normal distributions p and q the above simplifies to [ 31 ] D KL ( p ∥ q ) = log ⁡ σ 1 σ 0 + σ 0 2 + ( μ 0 − μ 1 ) 2 2 σ 1 2 − 1 2 {\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log {\frac {\sigma _{1}}{\sigma _{0}}}+{\frac {\sigma _{0}^{2}+{\left(\mu _{0}-\mu _{1}\right)}^{2}}{2\sigma _{1}^{2}}}-{\frac {1}{2}}} In the case of co-centered normal distributions with k = σ 1 / σ 0 {\displaystyle k=\sigma _{1}/\sigma _{0}} , this simplifies [ 32 ] to: D KL ( p ∥ q ) = log 2 ⁡ k + ( k − 2 − 1 ) / 2 / ln ⁡ ( 2 ) b i t s {\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log _{2}k+(k^{-2}-1)/2/\ln(2)\mathrm {bits} } Consider two uniform distributions, with the support of p = [ A , B ] {\displaystyle p=[A,B]} enclosed within q = [ C , D ] {\displaystyle q=[C,D]} ( C ≤ A < B ≤ D {\displaystyle C\leq A<B\leq D} ). Then the information gain is: D KL ( p ∥ q ) = log ⁡ D − C B − A {\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log {\frac {D-C}{B-A}}} Intuitively, [ 32 ] the information gain to a k times narrower uniform distribution contains log 2 ⁡ k {\displaystyle \log _{2}k} bits. This connects with the use of bits in computing, where log 2 ⁡ k {\displaystyle \log _{2}k} bits would be needed to identify one element of a k long stream. 
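Returning to the multivariate normal formulas above: the direct closed form and its Cholesky restatement can be checked against each other numerically, as in the following sketch (means and covariances are randomly generated here purely for illustration).

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(1)
k = 4
mu0, mu1 = rng.normal(size=k), rng.normal(size=k)
A, B = rng.normal(size=(k, k)), rng.normal(size=(k, k))
S0, S1 = A @ A.T + k * np.eye(k), B @ B.T + k * np.eye(k)   # SPD covariances

# Direct closed form (in nats).
diff = mu1 - mu0
direct = 0.5 * (np.trace(np.linalg.solve(S1, S0)) - k
                + diff @ np.linalg.solve(S1, diff)
                + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Cholesky form: S0 = L0 L0^T, S1 = L1 L1^T, L1 M = L0, L1 y = mu1 - mu0.
L0 = cholesky(S0, lower=True)
L1 = cholesky(S1, lower=True)
M = solve_triangular(L1, L0, lower=True)
y = solve_triangular(L1, diff, lower=True)
via_chol = 0.5 * (np.sum(M ** 2) - k + y @ y
                  + 2 * np.sum(np.log(np.diag(L1) / np.diag(L0))))

print(direct, via_chol)   # identical up to rounding
```

The Cholesky route avoids explicit inverses and determinants, which is why it is the numerically preferred form mentioned above.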
The exponential family of distributions is given by p X ( x | θ ) = h ( x ) exp ⁡ ( θ T T ( x ) − A ( θ ) ) {\displaystyle p_{X}(x|\theta )=h(x)\exp \left(\theta ^{\mathsf {T}}T(x)-A(\theta )\right)} where h ( x ) {\displaystyle h(x)} is the reference measure, T ( x ) {\displaystyle T(x)} is the sufficient statistic, θ {\displaystyle \theta } is the natural parameter, and A ( θ ) {\displaystyle A(\theta )} is the log-partition function. The KL divergence between two distributions p ( x | θ 1 ) {\displaystyle p(x|\theta _{1})} and p ( x | θ 2 ) {\displaystyle p(x|\theta _{2})} is given by [ 33 ] D KL ( θ 1 ∥ θ 2 ) = ( θ 1 − θ 2 ) T μ 1 − A ( θ 1 ) + A ( θ 2 ) {\displaystyle D_{\text{KL}}(\theta _{1}\parallel \theta _{2})={\left(\theta _{1}-\theta _{2}\right)}^{\mathsf {T}}\mu _{1}-A(\theta _{1})+A(\theta _{2})} where μ 1 = E θ 1 [ T ( X ) ] = ∇ A ( θ 1 ) {\displaystyle \mu _{1}=E_{\theta _{1}}[T(X)]=\nabla A(\theta _{1})} is the mean parameter of p ( x | θ 1 ) {\displaystyle p(x|\theta _{1})} . For example, for the Poisson distribution with mean λ {\displaystyle \lambda } , the sufficient statistic is T ( x ) = x {\displaystyle T(x)=x} , the natural parameter is θ = log ⁡ λ {\displaystyle \theta =\log \lambda } , and the log-partition function is A ( θ ) = e θ {\displaystyle A(\theta )=e^{\theta }} . As such, the divergence between two Poisson distributions with means λ 1 {\displaystyle \lambda _{1}} and λ 2 {\displaystyle \lambda _{2}} is D KL ( λ 1 ∥ λ 2 ) = λ 1 log ⁡ λ 1 λ 2 − λ 1 + λ 2 . {\displaystyle D_{\text{KL}}(\lambda _{1}\parallel \lambda _{2})=\lambda _{1}\log {\frac {\lambda _{1}}{\lambda _{2}}}-\lambda _{1}+\lambda _{2}.} As another example, for a normal distribution with unit variance N ( μ , 1 ) {\displaystyle N(\mu ,1)} , the sufficient statistic is T ( x ) = x {\displaystyle T(x)=x} , the natural parameter is θ = μ {\displaystyle \theta =\mu } , and the log-partition function is A ( θ ) = μ 2 / 2 {\displaystyle A(\theta )=\mu ^{2}/2} . Thus, the divergence between two normal distributions N ( μ 1 , 1 ) {\displaystyle N(\mu _{1},1)} and N ( μ 2 , 1 ) {\displaystyle N(\mu _{2},1)} is D KL ( μ 1 ∥ μ 2 ) = ( μ 1 − μ 2 ) μ 1 − μ 1 2 2 + μ 2 2 2 = ( μ 2 − μ 1 ) 2 2 . {\displaystyle D_{\text{KL}}(\mu _{1}\parallel \mu _{2})=\left(\mu _{1}-\mu _{2}\right)\mu _{1}-{\frac {\mu _{1}^{2}}{2}}+{\frac {\mu _{2}^{2}}{2}}={\frac {{\left(\mu _{2}-\mu _{1}\right)}^{2}}{2}}.} As a final example, the divergence between a normal distribution with unit variance N ( μ , 1 ) {\displaystyle N(\mu ,1)} and a Poisson distribution with mean λ {\displaystyle \lambda } is D KL ( μ ∥ λ ) = ( μ − log ⁡ λ ) μ − μ 2 2 + λ . {\displaystyle D_{\text{KL}}(\mu \parallel \lambda )=(\mu -\log \lambda )\mu -{\frac {\mu ^{2}}{2}}+\lambda .} While relative entropy is a statistical distance , it is not a metric on the space of probability distributions, but instead it is a divergence . [ 4 ] While metrics are symmetric and generalize linear distance, satisfying the triangle inequality , divergences are asymmetric in general and generalize squared distance, in some cases satisfying a generalized Pythagorean theorem . In general D KL ( P ∥ Q ) {\displaystyle D_{\text{KL}}(P\parallel Q)} does not equal D KL ( Q ∥ P ) {\displaystyle D_{\text{KL}}(Q\parallel P)} , and while this can be symmetrized (see § Symmetrised divergence ), the asymmetry is an important part of the geometry. [ 4 ] It generates a topology on the space of probability distributions .
More concretely, if { P 1 , P 2 , … } {\displaystyle \{P_{1},P_{2},\ldots \}} is a sequence of distributions such that lim n → ∞ D KL ( P n ∥ Q ) = 0 , {\displaystyle \lim _{n\to \infty }D_{\text{KL}}(P_{n}\parallel Q)=0,} then it is said that P n → D Q . {\displaystyle P_{n}\xrightarrow {D} \,Q.} Pinsker's inequality entails that P n → D P ⇒ P n → T V P , {\displaystyle P_{n}\xrightarrow {D} P\Rightarrow P_{n}\xrightarrow {TV} P,} where the latter stands for the usual convergence in total variation . Relative entropy is directly related to the Fisher information metric . This can be made explicit as follows. Assume that the probability distributions P and Q are both parameterized by some (possibly multi-dimensional) parameter θ {\displaystyle \theta } . Consider then two nearby values of P = P ( θ ) {\displaystyle P=P(\theta )} and Q = P ( θ 0 ) {\displaystyle Q=P(\theta _{0})} so that the parameter θ {\displaystyle \theta } differs by only a small amount from the parameter value θ 0 {\displaystyle \theta _{0}} . Specifically, up to first order one has (using the Einstein summation convention ) P ( θ ) = P ( θ 0 ) + Δ θ j P j ( θ 0 ) + ⋯ {\displaystyle P(\theta )=P(\theta _{0})+\Delta \theta _{j}\,P_{j}(\theta _{0})+\cdots } with Δ θ j = ( θ − θ 0 ) j {\displaystyle \Delta \theta _{j}=(\theta -\theta _{0})_{j}} a small change of θ {\displaystyle \theta } in the j direction, and P j ( θ 0 ) = ∂ P ∂ θ j ( θ 0 ) {\displaystyle P_{j}\left(\theta _{0}\right)={\frac {\partial P}{\partial \theta _{j}}}(\theta _{0})} the corresponding rate of change in the probability distribution. Since relative entropy has an absolute minimum 0 for P = Q {\displaystyle P=Q} , i.e. θ = θ 0 {\displaystyle \theta =\theta _{0}} , it changes only to second order in the small parameters Δ θ j {\displaystyle \Delta \theta _{j}} . More formally, as for any minimum, the first derivatives of the divergence vanish ∂ ∂ θ j | θ = θ 0 D KL ( P ( θ ) ∥ P ( θ 0 ) ) = 0 , {\displaystyle \left.{\frac {\partial }{\partial \theta _{j}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))=0,} and by the Taylor expansion one has up to second order D KL ( P ( θ ) ∥ P ( θ 0 ) ) = 1 2 Δ θ j Δ θ k g j k ( θ 0 ) + ⋯ {\displaystyle D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))={\frac {1}{2}}\,\Delta \theta _{j}\,\Delta \theta _{k}\,g_{jk}(\theta _{0})+\cdots } where the Hessian matrix of the divergence g j k ( θ 0 ) = ∂ 2 ∂ θ j ∂ θ k | θ = θ 0 D KL ( P ( θ ) ∥ P ( θ 0 ) ) {\displaystyle g_{jk}(\theta _{0})=\left.{\frac {\partial ^{2}}{\partial \theta _{j}\,\partial \theta _{k}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))} must be positive semidefinite . Letting θ 0 {\displaystyle \theta _{0}} vary (and dropping the subscript 0) the Hessian g j k ( θ ) {\displaystyle g_{jk}(\theta )} defines a (possibly degenerate) Riemannian metric on the θ parameter space, called the Fisher information metric.
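A one-parameter sketch makes this quadratic behaviour visible. For a Bernoulli family the Fisher information is g(θ0) = 1/(θ0(1 − θ0)) (a standard fact assumed here, not derived in this article), and the divergence approaches ½ g(θ0) Δθ 2 as Δθ → 0.

```python
import numpy as np

theta0 = 0.3
g = 1.0 / (theta0 * (1.0 - theta0))   # Fisher information of Bernoulli(theta0)

def kl_bernoulli(a, b):
    # D_KL(Bernoulli(a) || Bernoulli(b)) in nats.
    return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))

for dt in (0.1, 0.01, 0.001):
    exact = kl_bernoulli(theta0 + dt, theta0)
    quadratic = 0.5 * g * dt ** 2
    print(dt, exact, quadratic, exact / quadratic)   # ratio tends to 1
```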
When p ( x , ρ ) {\displaystyle p(x,\rho )} satisfies the following regularity conditions: ∂ log ⁡ ( p ) ∂ ρ , ∂ 2 log ⁡ ( p ) ∂ ρ 2 , ∂ 3 log ⁡ ( p ) ∂ ρ 3 {\displaystyle {\frac {\partial \log(p)}{\partial \rho }},{\frac {\partial ^{2}\log(p)}{\partial \rho ^{2}}},{\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}} exist, | ∂ p ∂ ρ | < F ( x ) : ∫ x = 0 ∞ F ( x ) d x < ∞ , | ∂ 2 p ∂ ρ 2 | < G ( x ) : ∫ x = 0 ∞ G ( x ) d x < ∞ | ∂ 3 log ⁡ ( p ) ∂ ρ 3 | < H ( x ) : ∫ x = 0 ∞ p ( x , 0 ) H ( x ) d x < ξ < ∞ {\displaystyle {\begin{aligned}\left|{\frac {\partial p}{\partial \rho }}\right|&<F(x):\int _{x=0}^{\infty }F(x)\,dx<\infty ,\\\left|{\frac {\partial ^{2}p}{\partial \rho ^{2}}}\right|&<G(x):\int _{x=0}^{\infty }G(x)\,dx<\infty \\\left|{\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}\right|&<H(x):\int _{x=0}^{\infty }p(x,0)H(x)\,dx<\xi <\infty \end{aligned}}} where ξ is independent of ρ ∫ x = 0 ∞ ∂ p ( x , ρ ) ∂ ρ | ρ = 0 d x = ∫ x = 0 ∞ ∂ 2 p ( x , ρ ) ∂ ρ 2 | ρ = 0 d x = 0 {\displaystyle \left.\int _{x=0}^{\infty }{\frac {\partial p(x,\rho )}{\partial \rho }}\right|_{\rho =0}\,dx=\left.\int _{x=0}^{\infty }{\frac {\partial ^{2}p(x,\rho )}{\partial \rho ^{2}}}\right|_{\rho =0}\,dx=0} then: D ( p ( x , 0 ) ∥ p ( x , ρ ) ) = c ρ 2 2 + O ( ρ 3 ) as ρ → 0. {\displaystyle {\mathcal {D}}(p(x,0)\parallel p(x,\rho ))={\frac {c\rho ^{2}}{2}}+{\mathcal {O}}\left(\rho ^{3}\right){\text{ as }}\rho \to 0.} Another information-theoretic metric is variation of information , which is roughly a symmetrization of conditional entropy . It is a metric on the set of partitions of a discrete probability space . MAUVE is a measure of the statistical gap between two text distributions, such as the difference between text generated by a model and human-written text. This measure is computed using Kullback–Leibler divergences between the two distributions in a quantized embedding space of a foundation model. Many of the other quantities of information theory can be interpreted as applications of relative entropy to specific cases. The self-information , also known as the information content of a signal, random variable, or event is defined as the negative logarithm of the probability of the given outcome occurring. When applied to a discrete random variable , the self-information can be represented as I ⁡ ( m ) = D KL ( δ im ∥ { p i } ) , {\displaystyle \operatorname {\operatorname {I} } (m)=D_{\text{KL}}\left(\delta _{\text{im}}\parallel \{p_{i}\}\right),} which is the relative entropy of the probability distribution P ( i ) {\displaystyle P(i)} from a Kronecker delta representing certainty that i = m {\displaystyle i=m} — i.e. the number of extra bits that must be transmitted to identify i if only the probability distribution P ( i ) {\displaystyle P(i)} is available to the receiver, not the fact that i = m {\displaystyle i=m} . The mutual information , I ⁡ ( X ; Y ) = D KL ( P ( X , Y ) ∥ P ( X ) P ( Y ) ) = E X ⁡ { D KL ( P ( Y ∣ X ) ∥ P ( Y ) ) } = E Y ⁡ { D KL ( P ( X ∣ Y ) ∥ P ( X ) ) } {\displaystyle {\begin{aligned}\operatorname {I} (X;Y)&=D_{\text{KL}}(P(X,Y)\parallel P(X)P(Y))\\[5pt]&=\operatorname {E} _{X}\{D_{\text{KL}}(P(Y\mid X)\parallel P(Y))\}\\[5pt]&=\operatorname {E} _{Y}\{D_{\text{KL}}(P(X\mid Y)\parallel P(X))\}\end{aligned}}} is the relative entropy of the joint probability distribution P ( X , Y ) {\displaystyle P(X,Y)} from the product P ( X ) P ( Y ) {\displaystyle P(X)P(Y)} of the two marginal probability distributions — i.e.
the expected number of extra bits that must be transmitted to identify X and Y if they are coded using only their marginal distributions instead of the joint distribution. Equivalently, if the joint probability P ( X , Y ) {\displaystyle P(X,Y)} is known, it is the expected number of extra bits that must on average be sent to identify Y if the value of X is not already known to the receiver. The Shannon entropy , H ( X ) = E ⁡ [ I X ⁡ ( x ) ] = log ⁡ N − D KL ( p X ( x ) ∥ P U ( X ) ) {\displaystyle {\begin{aligned}\mathrm {H} (X)&=\operatorname {E} \left[\operatorname {I} _{X}(x)\right]\\&=\log N-D_{\text{KL}}{\left(p_{X}(x)\parallel P_{U}(X)\right)}\end{aligned}}} is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the relative entropy of the uniform distribution on the random variates of X , P U ( X ) {\displaystyle P_{U}(X)} , from the true distribution P ( X ) {\displaystyle P(X)} — i.e. less the expected number of bits saved, which would have had to be sent if the value of X were coded according to the uniform distribution P U ( X ) {\displaystyle P_{U}(X)} rather than the true distribution P ( X ) {\displaystyle P(X)} . This definition of Shannon entropy forms the basis of E.T. Jaynes 's alternative generalization to continuous distributions, the limiting density of discrete points (as opposed to the usual differential entropy ), which defines the continuous entropy as lim N → ∞ H N ( X ) = log ⁡ N − ∫ p ( x ) log ⁡ p ( x ) m ( x ) d x , {\displaystyle \lim _{N\to \infty }H_{N}(X)=\log N-\int p(x)\log {\frac {p(x)}{m(x)}}\,dx,} which is equivalent to: log ⁡ ( N ) − D KL ( p ( x ) | | m ( x ) ) {\displaystyle \log(N)-D_{\text{KL}}(p(x)||m(x))} The conditional entropy [ 34 ] , H ( X ∣ Y ) = log ⁡ N − D KL ( P ( X , Y ) ∥ P U ( X ) P ( Y ) ) = log ⁡ N − D KL ( P ( X , Y ) ∥ P ( X ) P ( Y ) ) − D KL ( P ( X ) ∥ P U ( X ) ) = H ( X ) − I ⁡ ( X ; Y ) = log ⁡ N − E Y ⁡ [ D KL ( P ( X ∣ Y ) ∥ P U ( X ) ) ] {\displaystyle {\begin{aligned}\mathrm {H} (X\mid Y)&=\log N-D_{\text{KL}}(P(X,Y)\parallel P_{U}(X)P(Y))\\[5pt]&=\log N-D_{\text{KL}}(P(X,Y)\parallel P(X)P(Y))-D_{\text{KL}}(P(X)\parallel P_{U}(X))\\[5pt]&=\mathrm {H} (X)-\operatorname {I} (X;Y)\\[5pt]&=\log N-\operatorname {E} _{Y}\left[D_{\text{KL}}\left(P\left(X\mid Y\right)\parallel P_{U}(X)\right)\right]\end{aligned}}} is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the relative entropy of the product distribution P U ( X ) P ( Y ) {\displaystyle P_{U}(X)P(Y)} from the true joint distribution P ( X , Y ) {\displaystyle P(X,Y)} — i.e. less the expected number of bits saved which would have had to be sent if the value of X were coded according to the uniform distribution P U ( X ) {\displaystyle P_{U}(X)} rather than the conditional distribution P ( X | Y ) {\displaystyle P(X|Y)} of X given Y . When we have a set of possible events, coming from the distribution p , we can encode them (with a lossless data compression ) using entropy encoding . This compresses the data by replacing each fixed-length input symbol with a corresponding unique, variable-length, prefix-free code (e.g.: the events (A, B, C) with probabilities p = (1/2, 1/4, 1/4) can be encoded as the bits (0, 10, 11)). If we know the distribution p in advance, we can devise an encoding that would be optimal (e.g.: using Huffman coding ). 
This means the messages we encode will have the shortest length on average (assuming the encoded events are sampled from p ), which will be equal to the Shannon entropy of p (denoted as H ( p ) {\displaystyle \mathrm {H} (p)} ). However, if we use a different probability distribution ( q ) when creating the entropy encoding scheme, then a larger number of bits will be used (on average) to identify an event from a set of possibilities. This new (larger) number is measured by the cross entropy between p and q . The cross entropy between two probability distributions ( p and q ) measures the average number of bits needed to identify an event from a set of possibilities, if a coding scheme is used based on a given probability distribution q , rather than the "true" distribution p . The cross entropy for two distributions p and q over the same probability space is thus defined as follows. H ( p , q ) = E p ⁡ [ − log ⁡ q ] = H ( p ) + D KL ( p ∥ q ) . {\displaystyle \mathrm {H} (p,q)=\operatorname {E} _{p}[-\log q]=\mathrm {H} (p)+D_{\text{KL}}(p\parallel q).} For explicit derivation of this, see the Motivation section above. Under this scenario, the relative entropy (KL divergence) can be interpreted as the extra number of bits, on average, that are needed (beyond H ( p ) {\displaystyle \mathrm {H} (p)} ) for encoding the events because of using q for constructing the encoding scheme instead of p . In Bayesian statistics , relative entropy can be used as a measure of the information gain in moving from a prior distribution to a posterior distribution : p ( x ) → p ( x ∣ I ) {\displaystyle p(x)\to p(x\mid I)} . If some new fact Y = y {\displaystyle Y=y} is discovered, it can be used to update the posterior distribution for X from p ( x ∣ I ) {\displaystyle p(x\mid I)} to a new posterior distribution p ( x ∣ y , I ) {\displaystyle p(x\mid y,I)} using Bayes' theorem : p ( x ∣ y , I ) = p ( y ∣ x , I ) p ( x ∣ I ) p ( y ∣ I ) {\displaystyle p(x\mid y,I)={\frac {p(y\mid x,I)p(x\mid I)}{p(y\mid I)}}} This distribution has a new entropy : H ( p ( x ∣ y , I ) ) = − ∑ x p ( x ∣ y , I ) log ⁡ p ( x ∣ y , I ) , {\displaystyle \mathrm {H} {\big (}p(x\mid y,I){\big )}=-\sum _{x}p(x\mid y,I)\log p(x\mid y,I),} which may be less than or greater than the original entropy H ( p ( x ∣ I ) ) {\displaystyle \mathrm {H} (p(x\mid I))} . However, from the standpoint of the new probability distribution one can estimate that to have used the original code based on p ( x ∣ I ) {\displaystyle p(x\mid I)} instead of a new code based on p ( x ∣ y , I ) {\displaystyle p(x\mid y,I)} would have added an expected number of bits: D KL ( p ( x ∣ y , I ) ∥ p ( x ∣ I ) ) = ∑ x p ( x ∣ y , I ) log ⁡ p ( x ∣ y , I ) p ( x ∣ I ) {\displaystyle D_{\text{KL}}{\big (}p(x\mid y,I)\parallel p(x\mid I){\big )}=\sum _{x}p(x\mid y,I)\log {\frac {p(x\mid y,I)}{p(x\mid I)}}} to the message length. This therefore represents the amount of useful information, or information gain, about X , that has been learned by discovering Y = y {\displaystyle Y=y} . If a further piece of data, Y 2 = y 2 {\displaystyle Y_{2}=y_{2}} , subsequently comes in, the probability distribution for x can be updated further, to give a new best guess p ( x ∣ y 1 , y 2 , I ) {\displaystyle p(x\mid y_{1},y_{2},I)} .
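Two of the identities above can be checked directly. The sketch below (all numbers invented for illustration) computes mutual information as the divergence of a joint table from the product of its marginals, and the information gain of a single Bayesian update.

```python
import numpy as np

# Mutual information I(X;Y) = D_KL(P(X,Y) || P(X) P(Y)) for a toy joint table.
Pxy = np.array([[0.30, 0.10],
                [0.15, 0.45]])
Px = Pxy.sum(axis=1, keepdims=True)     # marginal P(X)
Py = Pxy.sum(axis=0, keepdims=True)     # marginal P(Y)
print(float(np.sum(Pxy * np.log(Pxy / (Px * Py)))))   # nats

# Information gain of one Bayesian update, D_KL(posterior || prior).
prior = np.array([0.5, 0.5])            # p(x | I)
likelihood = np.array([0.8, 0.3])       # p(y | x, I) for the observed y
posterior = prior * likelihood
posterior /= posterior.sum()            # Bayes' theorem
print(posterior,
      float(np.sum(posterior * np.log(posterior / prior))))  # ~0.107 nats
```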
If one reinvestigates the information gain for using $p(x \mid y_1, I)$ rather than $p(x \mid I)$, it turns out that it may be either greater or less than previously estimated:

$$\sum_x p(x \mid y_1, y_2, I) \log \frac{p(x \mid y_1, y_2, I)}{p(x \mid I)}$$

may be less than or greater than

$$\sum_x p(x \mid y_1, I) \log \frac{p(x \mid y_1, I)}{p(x \mid I)},$$

and so the combined information gain does not obey the triangle inequality:

$$D_{\text{KL}}\big(p(x \mid y_1, y_2, I) \parallel p(x \mid I)\big)$$

may be less than, equal to, or greater than

$$D_{\text{KL}}\big(p(x \mid y_1, y_2, I) \parallel p(x \mid y_1, I)\big) + D_{\text{KL}}\big(p(x \mid y_1, I) \parallel p(x \mid I)\big).$$

All one can say is that on average, averaging using $p(y_2 \mid y_1, x, I)$, the two sides will average out.

A common goal in Bayesian experimental design is to maximise the expected relative entropy between the prior and the posterior. [35] When posteriors are approximated as Gaussian distributions, a design maximising the expected relative entropy is called Bayes d-optimal.

Relative entropy $D_{\text{KL}}\big(p(x \mid H_1) \parallel p(x \mid H_0)\big)$ can also be interpreted as the expected discrimination information for $H_1$ over $H_0$: the mean information per sample for discriminating in favor of a hypothesis $H_1$ against a hypothesis $H_0$, when hypothesis $H_1$ is true. [36] Another name for this quantity, given to it by I. J. Good, is the expected weight of evidence for $H_1$ over $H_0$ to be expected from each sample.

The expected weight of evidence for $H_1$ over $H_0$ is not the same as the information gain expected per sample about the probability distribution $p(H)$ of the hypotheses:

$$D_{\text{KL}}(p(x \mid H_1) \parallel p(x \mid H_0)) \neq IG = D_{\text{KL}}(p(H \mid x) \parallel p(H \mid I)).$$

Either of the two quantities can be used as a utility function in Bayesian experimental design, to choose an optimal next question to investigate, but they will in general lead to rather different experimental strategies. On the entropy scale of information gain there is very little difference between near certainty and absolute certainty: coding according to a near certainty requires hardly any more bits than coding according to an absolute certainty. On the other hand, on the logit scale implied by weight of evidence, the difference between the two is enormous, perhaps infinite; this might reflect the difference between being almost sure (on a probabilistic level) that, say, the Riemann hypothesis is correct, and being certain that it is correct because one has a mathematical proof.
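A small numeric illustration (my own, not from the source) of the two scales just contrasted: the coding-cost (bits) scale barely distinguishes near certainty from certainty, while the log-odds scale diverges.

```python
# Coding cost -log2(p) changes negligibly as p -> 1, but log-odds ln(p/(1-p))
# grows without bound, becoming infinite at certainty.
import math

for p in (0.9, 0.99, 0.999, 1.0):
    bits = -math.log2(p)                                       # coding cost in bits
    logit = math.inf if p == 1.0 else math.log(p / (1 - p))    # weight of evidence scale
    print(f"p={p}: coding cost {bits:.5f} bits, log-odds {logit}")
```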
These two different scales of loss function for uncertainty are both useful, according to how well each reflects the particular circumstances of the problem in question.

The idea of relative entropy as discrimination information led Kullback to propose the Principle of Minimum Discrimination Information (MDI): given new facts, a new distribution f should be chosen which is as hard to discriminate from the original distribution $f_0$ as possible, so that the new data produce as small an information gain $D_{\text{KL}}(f \parallel f_0)$ as possible.

For example, if one had a prior distribution $p(x, a)$ over x and a, and subsequently learnt that the true distribution of a was $u(a)$, then the relative entropy between the new joint distribution for x and a, $q(x \mid a)\, u(a)$, and the earlier prior distribution would be

$$D_{\text{KL}}(q(x \mid a)\, u(a) \parallel p(x, a)) = \operatorname{E}_{u(a)}\left\{ D_{\text{KL}}(q(x \mid a) \parallel p(x \mid a)) \right\} + D_{\text{KL}}(u(a) \parallel p(a)),$$

i.e. the sum of the relative entropy of $p(a)$, the prior distribution for a, from the updated distribution $u(a)$, plus the expected value (using the probability distribution $u(a)$) of the relative entropy of the prior conditional distribution $p(x \mid a)$ from the new conditional distribution $q(x \mid a)$. (Note that the latter expected value is often called the conditional relative entropy, or conditional Kullback–Leibler divergence, and denoted $D_{\text{KL}}(q(x \mid a) \parallel p(x \mid a))$. [3] [34]) This is minimized if $q(x \mid a) = p(x \mid a)$ over the whole support of $u(a)$; and we note that this result incorporates Bayes' theorem, if the new distribution $u(a)$ is in fact a δ function representing certainty that a has one particular value.

MDI can be seen as an extension of Laplace's Principle of Insufficient Reason and the Principle of Maximum Entropy of E. T. Jaynes. In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (see differential entropy), but the relative entropy continues to be just as relevant.

In the engineering literature, MDI is sometimes called the Principle of Minimum Cross-Entropy (MCE), or Minxent for short. Minimising the relative entropy from m to p with respect to m is equivalent to minimizing the cross-entropy of p and m, since

$$\mathrm{H}(p, m) = \mathrm{H}(p) + D_{\text{KL}}(p \parallel m),$$

which is appropriate if one is trying to choose an adequate approximation to p. However, this is just as often not the task one is trying to achieve. Instead, just as often it is m that is some fixed prior reference measure, and p that one is attempting to optimise by minimising $D_{\text{KL}}(p \parallel m)$ subject to some constraint.
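The cross-entropy identity used just above follows in one line from the definitions; in the discrete case:

$$\mathrm{H}(p, m) = -\sum_x p(x)\log m(x) = -\sum_x p(x)\log p(x) + \sum_x p(x)\log\frac{p(x)}{m(x)} = \mathrm{H}(p) + D_{\text{KL}}(p \parallel m).$$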
This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be $D_{\text{KL}}(p \parallel m)$ rather than $\mathrm{H}(p, m)$. [citation needed]

Surprisals [37] add where probabilities multiply. The surprisal for an event of probability p is defined as $s = -k \ln p$. If k is $1$, $1/\ln 2$, or $1.38 \times 10^{-23}$, then the surprisal is measured in nats, bits, or J/K respectively, so that, for instance, there are N bits of surprisal for landing all "heads" on a toss of N coins.

Best-guess states (e.g. for atoms in a gas) are inferred by maximizing the average surprisal S (entropy) for a given set of control parameters (like pressure P or volume V). This constrained entropy maximization, both classically [38] and quantum mechanically, [39] minimizes Gibbs availability in entropy units [40] $A \equiv -k \ln Z$, where Z is a constrained multiplicity or partition function. When temperature T is fixed, free energy ($T \times A$) is also minimized. Thus if T, V and the number of molecules N are constant, the Helmholtz free energy $F \equiv U - TS$ (where U is energy and S is entropy) is minimized as a system "equilibrates". If T and P are held constant (say during processes in your body), the Gibbs free energy $G = U + PV - TS$ is minimized instead. The change in free energy under these conditions is a measure of the available work that might be done in the process. Thus the available work for an ideal gas at constant temperature $T_o$ and pressure $P_o$ is $W = \Delta G = N k T_o \, \Theta(V/V_o)$, where $V_o = N k T_o / P_o$ and $\Theta(x) = x - 1 - \ln x \geq 0$ (see also the Gibbs inequality).

More generally, [41] the work available relative to some ambient is obtained by multiplying the ambient temperature $T_o$ by the relative entropy, or net surprisal, $\Delta I \geq 0$, defined as the average value of $k \ln(p/p_o)$, where $p_o$ is the probability of a given state under ambient conditions. For instance, the work available in equilibrating a monatomic ideal gas to ambient values of $V_o$ and $T_o$ is $W = T_o \Delta I$, where the relative entropy is

$$\Delta I = N k \left[ \Theta\!\left(\frac{V}{V_o}\right) + \frac{3}{2}\, \Theta\!\left(\frac{T}{T_o}\right) \right].$$

The resulting contours of constant relative entropy (computed, for example, for a mole of argon at standard temperature and pressure) put limits on the conversion of hot to cold, as in flame-powered air-conditioning or in the unpowered device for converting boiling water to ice water discussed in [42]. Thus relative entropy measures thermodynamic availability in bits.
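A hypothetical worked example (the numbers are mine, not from the source) of the net-surprisal formula above for one mole of monatomic ideal gas:

```python
# Available work W = T_o * dI for a monatomic ideal gas, with
# dI = N k [ Theta(V/V_o) + (3/2) Theta(T/T_o) ] and Theta(x) = x - 1 - ln(x).
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23     # molecules per mole

def Theta(x):
    return x - 1 - math.log(x)   # >= 0, with equality only at x = 1

def available_work(T, V, T_o, V_o, N=N_A):
    dI = N * k_B * (Theta(V / V_o) + 1.5 * Theta(T / T_o))   # net surprisal, J/K
    return T_o * dI                                          # work, in joules

# One mole at 373 K equilibrating to a 300 K ambient at the same volume:
print(available_work(T=373.0, V=1.0, T_o=300.0, V_o=1.0))    # about 96 J
```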
For density matrices P and Q on a Hilbert space, the quantum relative entropy from Q to P is defined to be

$$D_{\text{KL}}(P \parallel Q) = \operatorname{Tr}\big(P(\log P - \log Q)\big).$$

In quantum information science, the minimum of $D_{\text{KL}}(P \parallel Q)$ over all separable states Q can also be used as a measure of entanglement in the state P.

Just as relative entropy of "actual from ambient" measures thermodynamic availability, relative entropy of "reality from a model" is also useful even if the only clues we have about reality are some experimental measurements. In the former case relative entropy describes distance to equilibrium or (when multiplied by ambient temperature) the amount of available work, while in the latter case it tells you about the surprises that reality has up its sleeve or, in other words, how much the model has yet to learn. Although this tool for evaluating models against systems that are accessible experimentally may be applied in any field, its application to selecting a statistical model via the Akaike information criterion is particularly well described in papers [43] and a book [44] by Burnham and Anderson. In a nutshell, the relative entropy of reality from a model may be estimated, to within a constant additive term, by a function of the deviations observed between data and the model's predictions (like the mean squared deviation). Estimates of such divergence for models that share the same additive term can in turn be used to select among models.

When trying to fit parametrized models to data there are various estimators which attempt to minimize relative entropy, such as maximum likelihood and maximum spacing estimators. [citation needed]

Kullback & Leibler (1951) also considered the symmetrized function [6]

$$D_{\text{KL}}(P \parallel Q) + D_{\text{KL}}(Q \parallel P),$$

which they referred to as the "divergence", though today the "KL divergence" refers to the asymmetric function (see § Etymology for the evolution of the term). This function is symmetric and nonnegative, and had already been defined and used by Harold Jeffreys in 1948; [7] it is accordingly called the Jeffreys divergence. This quantity has sometimes been used for feature selection in classification problems, where P and Q are the conditional pdfs of a feature under two different classes. In the banking and finance industries, this quantity is referred to as the Population Stability Index (PSI), and is used to assess distributional shifts in model features through time.

An alternative is given via the λ-divergence,

$$D_\lambda(P \parallel Q) = \lambda D_{\text{KL}}(P \parallel \lambda P + (1 - \lambda) Q) + (1 - \lambda) D_{\text{KL}}(Q \parallel \lambda P + (1 - \lambda) Q),$$

which can be interpreted as the expected information gain about X from discovering which probability distribution X is drawn from, P or Q, if they currently have probabilities λ and 1 − λ respectively. [clarification needed] [citation needed]

The value λ = 0.5 gives the Jensen–Shannon divergence, defined by

$$D_{\text{JS}} = \tfrac{1}{2} D_{\text{KL}}(P \parallel M) + \tfrac{1}{2} D_{\text{KL}}(Q \parallel M),$$

where M is the average of the two distributions, $M = \tfrac{1}{2}(P + Q)$.
We can also interpret $D_{\text{JS}}$ as the capacity of a noisy information channel with two inputs giving the output distributions P and Q. The Jensen–Shannon divergence, like all f-divergences, is locally proportional to the Fisher information metric. It is similar to the Hellinger metric (in the sense that it induces the same affine connection on a statistical manifold). Furthermore, the Jensen–Shannon divergence can be generalized using abstract statistical M-mixtures relying on an abstract mean M. [45] [46]

There are many other important measures of probability distance, some of which are particularly connected with relative entropy. Other notable measures of distance include the Hellinger distance, histogram intersection, chi-squared statistic, quadratic form distance, match distance, Kolmogorov–Smirnov distance, and earth mover's distance. [49]

Just as absolute entropy serves as theoretical background for data compression, relative entropy serves as theoretical background for data differencing: the absolute entropy of a set of data in this sense is the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the target given the source (minimum size of a patch).
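A minimal sketch (my own, assuming discrete distributions given as probability lists) tying together the quantities defined in this section: Shannon entropy, cross-entropy, relative entropy, and the Jensen–Shannon divergence.

```python
# H(p, q) = H(p) + D_KL(p || q); D_JS is symmetric, built from KL to the mixture M.
import math

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def kl_divergence(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

p = [0.5, 0.25, 0.25]
q = [1/3, 1/3, 1/3]
print(entropy(p), cross_entropy(p, q), kl_divergence(p, q), js_divergence(p, q))
# cross_entropy(p, q) - entropy(p) == kl_divergence(p, q)
```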
https://en.wikipedia.org/wiki/Minimum_Discrimination_Information
In information theory, the principle of minimum Fisher information (MFI) is a variational principle which, when applied with the proper constraints needed to reproduce empirically known expectation values, determines the best probability distribution that characterizes the system. (See also Fisher information.)

Information measures (IMs) are the most important tools of information theory. They measure either the amount of positive information or the amount of "missing" information an observer possesses with regard to any system of interest. The most famous IM is the Shannon entropy (1948), which determines how much additional information the observer still requires in order to have all the available knowledge regarding a given system S, when all he or she has is a probability density function (PDF) defined on appropriate elements of the system. It is thus a measure of missing information, and is a function of the PDF only. If the observer does not have such a PDF, but only a finite set of empirically determined mean values of the system, then a fundamental scientific principle, the maximum entropy principle (MaxEnt), asserts that the "best" PDF is the one that, while reproducing the known expectation values, otherwise maximizes Shannon's IM.

Fisher information (FIM), named after Ronald Fisher (1925), is a different kind of measure in two respects: 1) it reflects the amount of (positive) information of the observer, and 2) it depends not only on the PDF but also on its first derivatives, a property that makes it a local quantity (Shannon's is instead a global one). The corresponding counterpart of MaxEnt is now FIM minimization, since Fisher's measure grows when Shannon's diminishes, and vice versa. The minimization referred to here (MFI) is an important theoretical tool in a manifold of disciplines, beginning with physics. In a sense it is clearly superior to MaxEnt, because the latter procedure always yields an exponential PDF as its solution, while the MFI solution is a differential equation for the PDF, which allows for greater flexibility and versatility.

Much effort has been devoted to Fisher's information measure, shedding much light on its manifold physical applications. [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [excessive citations] As a small sample, it can be shown that the whole field of thermodynamics (both equilibrium and non-equilibrium) can be derived from the MFI approach. [16] Here FIM is specialized to the particular but important case of translation families, i.e., distribution functions whose form does not change under translational transformations. In this case, the Fisher measure becomes shift-invariant. Minimizing Fisher's measure then leads to a Schrödinger-like equation for the probability amplitude, where the ground state describes equilibrium physics and the excited states account for non-equilibrium situations. [17]

More recently, Zipf's law has been shown to arise as the variational solution of the MFI when scale invariance is introduced in the measure, providing for the first time an explanation of this regularity from first principles. [18] It has also been shown that MFI can be used to formulate a thermodynamics based on scale invariance instead of translational invariance, allowing the definition of the scale-free ideal gas, the scale-invariant equivalent of the ideal gas. [19]
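To make the shift-invariant Fisher measure for translation families concrete: it takes the standard form $I[p] = \int p'(x)^2 / p(x)\, dx$, which for a Gaussian of standard deviation s equals $1/s^2$. The sketch below (my own illustration; the helper name is an assumption, not from the source) estimates it numerically on a grid.

```python
# Numerical estimate of the shift-invariant Fisher information
#     I[p] = integral of p'(x)^2 / p(x) dx
# via finite differences; for a Gaussian with s = 2 the exact value is 0.25.
import numpy as np

def fisher_information(p, dx):
    """Estimate I[p] for a density sampled on a uniform grid with spacing dx."""
    dp = np.gradient(p, dx)          # numerical derivative p'(x)
    mask = p > 1e-12                 # avoid division by ~0 in the far tails
    return np.sum(dp[mask] ** 2 / p[mask]) * dx

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
s = 2.0
gauss = np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
print(fisher_information(gauss, dx))   # close to 1/s^2 = 0.25
```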
https://en.wikipedia.org/wiki/Minimum_Fisher_information
The Minimum Information Required About a Glycomics Experiment (MIRAGE) initiative is part of the Minimum Information Standards and specifically provides guidelines for reporting (describing the metadata of) a glycomics experiment. The initiative is supported by the Beilstein Institute for the Advancement of Chemical Sciences. [1] The MIRAGE project focuses on the development of publication guidelines for interaction and structural glycomics data, as well as the development of data exchange formats. The project was launched in 2011 in Seattle and began with a description of the aims of the MIRAGE project. [2]

The MIRAGE Commission consists of three groups which interact closely with each other. The advisory board consists of leading scientists in glycobiology who, for example, critically review the outcomes of the working group and promote the reporting guidelines within the community. The working group seeks external consultation and interacts directly with the glycomics community; its members carry out defined subprojects (e.g. the development and revision of guidelines), each focusing on a specific research area, to fulfill the overall aims of the MIRAGE project. The co-ordination team links the subprojects from the working group together and passes the outcomes to the advisory board for review. Several reporting guidelines have been developed and published.

The MIRAGE reporting guidelines provide essential frameworks for subsequent projects concerned with the development of both software tools for the analysis of experimental glycan data and databases for the deposition of interaction analysis data (e.g. from glycan microarray experiments) and structural analysis data (e.g. from mass spectrometry and liquid chromatography experiments). As the guidelines define the minimum information required for reporting glycomics experiments comprehensively, this information is incorporated in database structures, data acquisition forms and data exchange formats. A number of databases comply with the MIRAGE guidelines, and several related projects refer to the MIRAGE standards.
https://en.wikipedia.org/wiki/Minimum_Information_Required_About_a_Glycomics_Experiment
The minimum bactericidal concentration (MBC) is the lowest concentration of an antibacterial agent required to kill a particular bacterium. [1] It can be determined from broth dilution minimum inhibitory concentration (MIC) tests by subculturing to agar plates that do not contain the test agent. The MBC is identified by determining the lowest concentration of antibacterial agent that reduces the viability of the initial bacterial inoculum by ≥99.9%. [2] The MBC is complementary to the MIC; whereas the MIC test demonstrates the lowest level of antimicrobial agent that inhibits growth, the MBC demonstrates the lowest level of antimicrobial agent that results in microbial death. This means that even if a particular MIC shows inhibition, plating the bacteria onto agar might still result in organism proliferation because the antimicrobial did not cause death. Antibacterial agents are usually regarded as bactericidal if the MBC is no more than four times the MIC. [3] [4] Because the MBC test uses colony-forming units as a proxy measure of bacterial viability, it can be confounded by antibacterial agents which cause aggregation of bacterial cells. Examples of antibacterial agents which do this include flavonoids [4] and peptides. [5] [6]
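A hypothetical worked sketch (the function, concentrations, and counts are mine, not from the source) of reading an MBC off a dilution series, using the ≥99.9% kill criterion and the conventional MBC ≤ 4 × MIC bactericidal check described above:

```python
# MBC = lowest concentration whose subculture shows <= 0.1% of the initial inoculum.
def mbc(initial_cfu_per_ml, survivors_by_conc):
    """survivors_by_conc: {concentration (ug/mL): surviving CFU/mL after subculture}."""
    threshold = initial_cfu_per_ml * 0.001          # >= 99.9% kill
    killing = [c for c, cfu in survivors_by_conc.items() if cfu <= threshold]
    return min(killing) if killing else None

counts = {0.5: 8e5, 1: 2e4, 2: 600, 4: 50, 8: 0}    # hypothetical plate counts
print(mbc(1e6, counts))                              # -> 2 (ug/mL)

mic = 1.0                                            # hypothetical MIC, ug/mL
print(mbc(1e6, counts) <= 4 * mic)                   # bactericidal by convention -> True
```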
https://en.wikipedia.org/wiki/Minimum_bactericidal_concentration
Minimum bias (MB) events are inelastic events selected by a high-energy experiment's loose ("minimum bias") trigger, with as little bias as possible. MB events can include both non-diffractive and diffractive processes, although the precise definition and the relative contributions vary among experiments and analyses. Quite often the beam hadrons ooze through each other and fall apart without any hard collision occurring in the event. An MB event is not the same as the underlying event (UE), which consists of the particles accompanying a hard scattering. The density of particles in the UE in jet events is found to be roughly a factor of two greater than that in MB events in proton-proton collisions at the Tevatron and the LHC. [1]
https://en.wikipedia.org/wiki/Minimum_bias_event
In geometry, the minimum bounding box or smallest bounding box (also known as the minimum enclosing box or smallest enclosing box) for a point set S in N dimensions is the box with the smallest measure (area, volume, or hypervolume in higher dimensions) within which all the points lie. When other kinds of measure are used, the minimum box is usually named accordingly, e.g., the "minimum-perimeter bounding box". The minimum bounding box of a point set is the same as the minimum bounding box of its convex hull, a fact which may be used heuristically to speed up computation. [1] In the two-dimensional case it is called the minimum bounding rectangle.

The axis-aligned minimum bounding box (or AABB) for a given point set is its minimum bounding box subject to the constraint that the edges of the box are parallel to the (Cartesian) coordinate axes. It is the Cartesian product of N intervals, each of which is defined by the minimal and maximal value of the corresponding coordinate for the points in S. Axis-aligned minimum bounding boxes are used as an approximate location of an object in question and as a very simple descriptor of its shape. For example, in computational geometry and its applications, when it is required to find intersections in a set of objects, the initial check is for intersections between their MBBs. Since this is usually a much less expensive operation than checking for the actual intersection (because it only requires comparisons of coordinates), it allows pairs that are far apart to be excluded quickly.

The arbitrarily oriented minimum bounding box is the minimum bounding box calculated subject to no constraints as to the orientation of the result. Minimum bounding box algorithms based on the rotating calipers method can be used to find the minimum-area or minimum-perimeter bounding box of a two-dimensional convex polygon in linear time, and of a three-dimensional point set in the time it takes to construct its convex hull followed by a linear-time computation. [1] A three-dimensional rotating-calipers algorithm can find the minimum-volume arbitrarily oriented bounding box of a three-dimensional point set in cubic time. [2] Matlab implementations of the latter, as well as of the optimal compromise between accuracy and CPU time, are available. [3]

In the case where an object has its own local coordinate system, it can be useful to store a bounding box relative to these axes, which requires no transformation as the object's own transformation changes. In digital image processing, the bounding box is merely the coordinates of the rectangular border that fully encloses a digital image when it is placed over a page, a canvas, a screen, or another similar bidimensional background.
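A minimal sketch (my own, not from the source) of the axis-aligned case: the AABB is just the Cartesian product of per-axis [min, max] intervals, and the cheap overlap pre-check described above reduces to interval comparisons on each axis.

```python
# AABB construction and the coordinate-comparison overlap test used to prune
# pairs of objects before any exact (expensive) intersection check.
from typing import List, Tuple

Point = Tuple[float, ...]
Box = Tuple[Point, Point]   # (per-axis minima, per-axis maxima)

def aabb(points: List[Point]) -> Box:
    dims = range(len(points[0]))
    lo = tuple(min(p[i] for p in points) for i in dims)
    hi = tuple(max(p[i] for p in points) for i in dims)
    return lo, hi

def boxes_intersect(a: Box, b: Box) -> bool:
    # Boxes overlap iff their intervals overlap on every axis.
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(len(a[0])))

pts = [(1.0, 2.0), (3.0, -1.0), (2.0, 4.0)]
print(aabb(pts))   # ((1.0, -1.0), (3.0, 4.0))
```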
https://en.wikipedia.org/wiki/Minimum_bounding_box
MDMT is one of the design conditions for pressure vessel engineering calculations, design and manufacturing according to the ASME Boiler and Pressure Vessel Code. Each pressure vessel that conforms to the ASME code has its own MDMT, and this temperature is stamped on the vessel nameplate. The precise definition can sometimes be a little elaborate, but in simple terms the MDMT is a temperature arbitrarily selected by the user on the basis of the type of fluid and the temperature range the vessel is going to handle. This arbitrary MDMT must be lower than or equal to the CET (which is an environmental or "process" property, see below) and must be higher than or equal to the (MDMT)_M (which is a material property).

The critical exposure temperature (CET) is the lowest anticipated temperature to which the vessel will be subjected, taking into consideration the lowest operating temperature, operational upsets, autorefrigeration, atmospheric temperature, and any other sources of cooling. In some cases it may be the lowest temperature at which significant stresses will occur, rather than the lowest possible temperature.

(MDMT)_M is the lowest temperature permitted by the metallurgy of the vessel fabrication materials and the thickness of the vessel component, that is, according to the low-temperature embrittlement range and the Charpy impact test requirements per temperature and thickness, for each of the vessel's components.
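The selection rule just stated reduces to a pair of inequalities; a tiny sketch (my own, with hypothetical values, not from the source):

```python
# The user-selected MDMT must satisfy (MDMT)_M <= MDMT <= CET.
def mdmt_is_acceptable(mdmt_c: float, cet_c: float, mdmt_material_c: float) -> bool:
    return mdmt_material_c <= mdmt_c <= cet_c

# Hypothetical values in degrees Celsius:
print(mdmt_is_acceptable(mdmt_c=-20.0, cet_c=-15.0, mdmt_material_c=-29.0))  # True
```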
https://en.wikipedia.org/wiki/Minimum_design_metal_temperature
In game theory, the minimum effort game or weakest link game is a game in which each person decides how much effort to put in and is rewarded based on the least amount of effort anyone puts in. [1] It is assumed that the reward per unit of effort is greater than the cost per unit of effort, since otherwise there would be no reason to put in effort.

Source: [2]. If there are $n$ players, the set of effort levels is $A = \{1, \ldots, K\}$, it costs each player $c$ dollars to put in one unit of effort, and each player is rewarded $b$ dollars for each unit of effort the laziest person puts in, then there are $K$ pure-strategy Nash equilibria, one for each $k \in A$, with each player putting in the same amount of effort $k$: putting in more effort costs more money without extra reward, while putting in less effort reduces the reward earned. There are $\frac{K(K-1)}{2}$ non-pure Nash equilibria, given as follows: each player chooses two effort levels $k < l$ and puts in $k$ units of effort with probability $\left(\frac{c}{b}\right)^{\frac{1}{n-1}}$ and $l$ units of effort with probability $1 - \left(\frac{c}{b}\right)^{\frac{1}{n-1}}$.

The amount of effort players put in depends on the amount of effort they think other players will put in. [3] In addition, some players will put in more effort than expected in an attempt to get others to put in more effort.
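A minimal sketch (my own, not from the source) of the payoff structure just described: player i earns b times the minimum effort minus c times their own effort, and the check below confirms that any common effort level is a pure-strategy Nash equilibrium when b > c.

```python
# Payoff u_i = b * min(all efforts) - c * (own effort); verify no unilateral
# deviation from a common effort level k is profitable.
def payoff(i, efforts, b=2.0, c=1.0):
    return b * min(efforts) - c * efforts[i]

def is_pure_equilibrium(k, n=3, K=5, b=2.0, c=1.0):
    base = [k] * n
    u = payoff(0, base, b, c)
    # By symmetry it suffices to check deviations by player 0.
    return all(payoff(0, [e] + base[1:], b, c) <= u for e in range(1, K + 1))

print(all(is_pure_equilibrium(k) for k in range(1, 6)))   # True when b > c
```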
https://en.wikipedia.org/wiki/Minimum_effort_game
Minimum evolution is a distance method employed in phylogenetics modeling. It shares with maximum parsimony the aspect of searching for the phylogeny that has the shortest total sum of branch lengths. [1] [2]

The theoretical foundations of the minimum evolution (ME) criterion lie in the seminal works of Kidd and Sgaramella-Zonta (1971) [3] and Rzhetsky and Nei (1993). [4] In these frameworks, the molecular sequences from taxa are replaced by a set of measures of their dissimilarity (i.e., the so-called "evolutionary distances"), and a fundamental result states that if such distances were unbiased estimates of the true evolutionary distances from taxa (i.e., the distances that one would obtain if all the molecular data from the taxa were available), then the true phylogeny of the taxa would have an expected length shorter than any other possible phylogeny T compatible with those distances.

It is worth noting here a subtle difference between the maximum-parsimony criterion and the ME criterion: while maximum parsimony is based on an abductive heuristic, i.e., the plausibility of the simplest evolutionary hypothesis of taxa with respect to more complex ones, the ME criterion is based on Kidd and Sgaramella-Zonta's conjectures, which were proven true 22 years later by Rzhetsky and Nei. [4] These mathematical results set the ME criterion free from the Occam's razor principle and confer on it a solid theoretical and quantitative basis.

Similarly to ME, maximum parsimony becomes an NP-hard problem when trying to find the optimal tree [5] (that is, the one with the fewest total character-state changes). This is why heuristics are often utilized to select a tree, though this does not guarantee the tree will be an optimal selection for the input dataset. Maximum parsimony is often used when very similar sequences are analyzed, as part of the process is locating informative sites in the sequences where a notable number of substitutions can be found. [6] The maximum-parsimony criterion, which uses Hamming distance branch lengths, was shown to be statistically inconsistent in 1978; this led to an interest in statistically consistent alternatives such as ME. [7]

Neighbor joining (NJ) may be viewed as a greedy heuristic for the balanced minimum evolution (BME) criterion. Saitou and Nei's 1987 NJ algorithm far predates the BME criterion of 2000; for two decades, researchers used NJ without a firm theoretical basis for why it works. [8] While neighbor joining shares the same underlying principle of prioritizing minimal evolutionary steps, it differs from maximum parsimony in that it is a distance method rather than a character-based method. Distance methods like neighbor joining are often simpler to implement and more efficient, which has led to their popularity for analyzing especially large datasets where computational speed is critical. Neighbor joining is a relatively fast phylogenetic tree-building method, though its worst-case time complexity can still be O(N^3) without heuristic implementations to improve on this. [9] It also considers varying rates of evolution across branches, which many other methods do not account for. Neighbor joining is also a rather consistent method, in that an input distance matrix with little to no error will usually produce an output tree with minimal inaccuracy. However, using simple distance values rather than the full sequence information, as maximum parsimony does, entails a loss of information due to the simplification of the problem. [10]
Maximum likelihood contrasts with minimum evolution in that maximum likelihood searches for the tree most likely to have produced the data. However, due to the nature of the mathematics involved, it is less accurate with smaller datasets but becomes far less biased as the sample size increases, the error rate behaving as 1/log(n). Minimum evolution is similar, but it is less accurate with very large datasets. It is similarly powerful but overall much more complicated compared to UPGMA and other options. [11]

UPGMA is a clustering method. It builds a collection of clusters that are then further clustered until the maximum potential cluster is obtained; this is then worked backwards to determine the relations of the groups. It specifically uses an arithmetic mean, enabling a more stable clustering. Overall, while it is less powerful than any of the other listed comparisons, it is far simpler and less complex to implement. Minimum evolution is overall more powerful but also more complicated to set up, and is NP-hard. [12]

The ME criterion is known to be statistically consistent whenever the branch lengths are estimated via ordinary least squares (OLS) or via linear programming. [4] [13] [14] However, as observed in Rzhetsky and Nei's article, the phylogeny having the minimum length under the OLS branch length estimation model may in some circumstances be characterized by negative branch lengths, which unfortunately are empty of biological meaning. [4] To solve this drawback, Pauplin [15] proposed to replace OLS with a new branch length estimation model, known as balanced minimum evolution (BME). Richard Desper and Olivier Gascuel [16] showed that the BME branch length estimation model ensures the general statistical consistency of the minimum length phylogeny as well as the non-negativity of its branch lengths, whenever the estimated evolutionary distances from taxa satisfy the triangle inequality. Le Sy Vinh and Arndt von Haeseler [17] have shown, by means of massive and systematic simulation experiments, that the accuracy of the ME criterion under the BME branch length estimation model is by far the highest among distance methods and not inferior to that of alternative criteria based, e.g., on maximum likelihood or Bayesian inference. Moreover, as shown by Daniele Catanzaro, Martin Frohn and Raffaele Pesenti, [18] the minimum length phylogeny under the BME branch length estimation model can be interpreted as the (Pareto optimal) consensus tree between concurrent minimum entropy processes encoded by a forest of n phylogenies rooted on the n analyzed taxa. This particular information-theory-based interpretation is conjectured to be shared by all distance methods in phylogenetics.

François Denis and Olivier Gascuel [19] proved that the minimum evolution principle is not consistent under weighted least squares (WLS) or generalized least squares (GLS). They showed that in OLS models, where all weights are equal, an algorithm called EDGE_LENGTHS can compute the lengths of the two edges (1, u) and (2, u) without using the distances $\delta_{ij}$ with $i, j \neq 1, 2$. This property does not hold in the WLS or GLS models, and without it the ME principle is not consistent in those models.

The "minimum evolution problem" (MEP), in which a minimum-summed-length phylogeny is derived from a set of sequences under the ME criterion, is said to be NP-hard. [20] [21]
The "balanced minimum evolution problem" (BMEP), which uses the newer BME criterion, is APX-hard. [20] A number of exact algorithms solving BMEP have been described. [22] [23] [24] [25] The best known exact algorithm [26] remains impractical for more than a dozen taxa, even with multiprocessing. [20] There is only one approximation algorithm with proven error bounds, published in 2012. In practical use, BMEP is overwhelmingly solved by heuristic search. The basic, aforementioned neighbor-joining algorithm implements a greedy version of BME. [27]

FastME, the "state of the art", [20] starts with a rough tree and then improves it using a set of topological moves such as nearest-neighbor interchanges (NNI). Compared to NJ, it is about as fast and more accurate. [28] FastME operates on the balanced minimum evolution principle, which calculates tree length as a weighted linear function of all pairwise distances. The BME score for a given topology is expressed as

$$\ell(T) = \sum_{i<j} w_{ij}\, d_{ij},$$

where $d_{ij}$ represents the evolutionary distance between taxa $i$ and $j$, and $w_{ij}$ is a topology-dependent weight that balances each pair's contribution (in Pauplin's formulation, $w_{ij} = 2^{1 - \tau_{ij}}$, with $\tau_{ij}$ the number of edges on the path between $i$ and $j$). This approach enables more accurate reconstructions than greedy algorithms like NJ. The algorithm improves the tree topology through local rearrangements, primarily subtree prune and regraft (SPR) and NNI operations. At each step, it checks whether a rearranged tree has a lower BME score; if so, the change is retained. This iterative refinement enables FastME to converge toward near-optimal solutions efficiently, even for large datasets. A simplified sketch of this refinement loop is given below.

Simulations reported by Desper and Gascuel demonstrate that FastME consistently outperforms NJ in terms of topological accuracy, particularly when evolutionary rates vary or distances deviate from strict additivity. It has also been successfully used on datasets with over 1,000 taxa. [29] Like most distance-based methods, BME assumes that the input distances are additive. When this assumption does not hold, due to noise, unequal rates, or other violations, the resulting trees may still be close to optimal, but accuracy can be affected. In addition to FastME, metaheuristic methods such as genetic algorithms and simulated annealing have also been used to explore tree topologies under the minimum evolution criterion, particularly for very large datasets where traditional heuristics may struggle. [30]
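A simplified, illustrative sketch (my own, not FastME's actual code) of the BME score and the greedy refinement loop described above. A tree is an adjacency dict; the NNI-neighbor generator is a hypothetical placeholder passed in as a parameter.

```python
# Pauplin's balanced tree length and a FastME-style hill-climbing loop.
from collections import deque
from itertools import combinations

def topological_distances(tree, leaves):
    """tau[i][j] = number of edges on the path between leaves i and j (BFS)."""
    tau = {}
    for src in leaves:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in tree[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        tau[src] = dist
    return tau

def bme_score(tree, leaves, D):
    """Balanced tree length: sum over leaf pairs of 2**(1 - tau_ij) * d_ij."""
    tau = topological_distances(tree, leaves)
    return sum(2.0 ** (1 - tau[i][j]) * D[i][j] for i, j in combinations(leaves, 2))

def refine(tree, leaves, D, neighbors):
    """Keep any rearrangement (e.g. an NNI move) that lowers the BME score;
    `neighbors` is a hypothetical generator of rearranged trees."""
    best = bme_score(tree, leaves, D)
    improved = True
    while improved:
        improved = False
        for cand in neighbors(tree):
            s = bme_score(cand, leaves, D)
            if s < best:
                tree, best = cand, s
                improved = True
                break   # restart the scan from the improved tree
    return tree
```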
https://en.wikipedia.org/wiki/Minimum_evolution