Modal analysis is the study of the dynamic properties of systems in the frequency domain. It consists of mechanically exciting a studied component in such a way as to target the mode shapes of the structure, and recording the vibration data with a network of sensors. Examples include measuring the vibration of a car's body when it is attached to a shaker, or the noise pattern in a room when excited by a loudspeaker.
Modern experimental modal analysis systems are composed of 1) sensors such as transducers (typically accelerometers and load cells) or non-contact devices such as laser vibrometers or stereophotogrammetric cameras, 2) a data acquisition system with an analog-to-digital converter front end (to digitize analog instrumentation signals), and 3) a host PC (personal computer) to view and analyze the data.
Classically this was done with a SIMO (single-input, multiple-output) approach: one excitation point, with the response measured at many other points. In the past, a hammer survey, using a fixed accelerometer and a roving hammer as excitation, gave a MISO (multiple-input, single-output) analysis, which is mathematically identical to SIMO due to the principle of reciprocity. In recent years MIMO (multiple-input, multiple-output) testing has become more practical, where partial coherence analysis identifies which part of the response comes from which excitation source. Using multiple shakers leads to a uniform distribution of the energy over the entire structure and a better coherence in the measurement; a single shaker may not effectively excite all the modes of a structure. [1]
Typical excitation signals can be classed as impulse, broadband, swept sine, chirp, and possibly others. Each has its own advantages and disadvantages.
The analysis of the signals typically relies on Fourier analysis. The resulting transfer function will show one or more resonances, whose characteristic mass, frequency, and damping ratio can be estimated from the measurements.
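As an illustration of this step, here is a minimal Python sketch that estimates a resonance frequency and damping ratio from a frequency response function. Everything in it is an assumption for the example (the synthetic single-degree-of-freedom system, the signal names, and the choice of the H1 spectral estimator); it is not taken from any particular modal-analysis package.

```python
import numpy as np
from scipy.signal import TransferFunction, csd, lsim, welch

# Synthesize a 1-DOF resonance (fn = 40 Hz, zeta = 2%) driven by broadband noise.
fs = 1024.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
f_exc = rng.standard_normal(t.size)                  # excitation force
wn, zeta = 2 * np.pi * 40.0, 0.02
sys = TransferFunction([1.0], [1.0, 2 * zeta * wn, wn**2])
_, resp, _ = lsim(sys, U=f_exc, T=t)                 # simulated response

# H1 estimate of the FRF: cross-spectrum over input auto-spectrum.
freq, S_fx = csd(f_exc, resp, fs=fs, nperseg=4096)
_, S_ff = welch(f_exc, fs=fs, nperseg=4096)
H = S_fx / S_ff

# Resonance frequency from the magnitude peak; damping from the
# half-power (-3 dB) bandwidth: zeta ~ (f2 - f1) / (2 * fn).
mag = np.abs(H)
k = np.argmax(mag)
fn_est = freq[k]
above = np.where(mag >= mag[k] / np.sqrt(2))[0]
zeta_est = (freq[above[-1]] - freq[above[0]]) / (2 * fn_est)
print(f"fn ≈ {fn_est:.1f} Hz, zeta ≈ {zeta_est:.3f}")
```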
The animated display of the mode shape is very useful to NVH (noise, vibration, and harshness) engineers.
The results can also be used to correlate with finite element analysis normal mode solutions.
In structural engineering, modal analysis uses the overall mass and stiffness of a structure to find the various periods at which it will naturally resonate. These periods of vibration are very important to note in earthquake engineering, as it is imperative that a building's natural frequency not match the frequency of expected earthquakes in the region in which the building is to be constructed. If a structure's natural frequency matches an earthquake's frequency, [citation needed] the structure may continue to resonate and experience structural damage. Modal analysis is also important in structures such as bridges, where the engineer should attempt to keep the natural frequencies away from the frequencies of people walking on the bridge. This may not be possible, and for this reason, when groups of people are to walk along a bridge (for example a group of soldiers), the recommendation is that they break step to avoid possibly significant excitation frequencies. Other natural excitation frequencies may exist and may excite a bridge's natural modes. Engineers tend to learn from such examples (at least in the short term), and more modern suspension bridges take account of the potential influence of wind through the shape of the deck, which might be designed in aerodynamic terms to pull the deck down against the support of the structure rather than allow it to lift. Other aerodynamic loading issues are dealt with by minimizing the area of the structure projected to the oncoming wind and by reducing wind-generated oscillations of, for example, the hangers in suspension bridges.
Although modal analysis is usually carried out by computers, it is possible to hand-calculate the period of vibration of any high-rise building by idealizing it as a fixed-ended cantilever with lumped masses.
The basic idea of a modal analysis in electrodynamics is the same as in mechanics. The application is to determine which electromagnetic wave modes can stand or propagate within conducting enclosures such as waveguides or resonators .
Once a set of modes has been calculated for a system, the response to any kind of excitation can be calculated as a superposition of modes. This means that the response is the sum of the different mode shapes, each one vibrating at its own frequency. The weighting coefficients of this sum depend on the initial conditions and on the input signal.
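As a minimal numerical sketch of this superposition (all values are illustrative assumptions, not data from a measured system), the free response of a two-degree-of-freedom system can be assembled mode by mode:

```python
import numpy as np

# Assumed modal data for a 2-DOF system.
phi = np.array([[1.0, 1.0],
                [1.5, -0.8]])            # mode shapes, one column per mode
wn = 2 * np.pi * np.array([5.0, 18.0])   # natural frequencies [rad/s]
zeta = np.array([0.01, 0.02])            # modal damping ratios
q0 = np.array([1.0, 0.3])                # weights set by initial conditions

t = np.linspace(0, 2, 2000)
wd = wn * np.sqrt(1 - zeta**2)           # damped natural frequencies
# Each modal coordinate is a decaying sinusoid at its own frequency...
q = q0[:, None] * np.exp(-zeta[:, None] * wn[:, None] * t) * np.cos(wd[:, None] * t)
# ...and the physical response is the mode-shape-weighted sum over modes.
x = phi @ q                              # shape: (2 DOFs, len(t))
```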
If the response is measured at point B in direction x (for example), for an excitation at point A in direction y, then the transfer function (crudely Bx/Ay in the frequency domain) is identical to that which is obtained when the response at Ay is measured when excited at Bx. That is Bx/Ay=Ay/Bx. Again this assumes (and is a good test for) linearity. (Furthermore, this assumes restricted types of damping and restricted types of active feedback.)
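In frequency-response-function notation (the symbols here are illustrative, not from the source), reciprocity can be stated as: the FRF from a force at A in direction y to the response at B in direction x equals the FRF from a force at B in direction x to the response at A in direction y, i.e.

H_BxAy(ω) = X_Bx(ω) / F_Ay(ω) = X_Ay(ω) / F_Bx(ω) = H_AyBx(ω),

where F denotes the excitation spectrum and X the response spectrum.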
Identification methods are the mathematical backbone of modal analysis. They allow, through linear algebra, and specifically through least-squares methods, the fitting of large amounts of data to find the modal constants (modal mass, modal stiffness, modal damping) of the system. The methods are divided, on the basis of the kind of system they aim to study, into SDOF (single degree of freedom) and MDOF (multiple degree of freedom) methods, and, on the basis of the domain in which the data fitting takes place, into time-domain and frequency-domain methods.

Source: https://en.wikipedia.org/wiki/Modal_analysis
In modal logic, modal collapse is the condition in which every true statement is necessarily true, and vice versa; that is to say, there are no contingent truths, or to put it another way, "everything exists necessarily" [1] [2] (and likewise if something does not exist, it cannot exist). In the notation of modal logic, this can be written as φ ↔ □φ.
In the context of philosophy, the term is commonly used in critiques of ontological arguments for the existence of God and the principle of divine simplicity. [1] [3] For example, Gödel's ontological proof contains φ → □φ as a theorem, which combined with the axioms of system S5 leads to modal collapse. [4] Since some regard divine freedom as essential to the nature of God, and modal collapse as negating the concept of free will, this then leads to the breakdown of Gödel's argument. [5]
Source: https://en.wikipedia.org/wiki/Modal_collapse
Modal dispersion is a distortion mechanism occurring in multimode fibers and other waveguides , in which the signal is spread in time because the propagation velocity of the optical signal is not the same for all modes . Other names for this phenomenon include multimode distortion , multimode dispersion , modal distortion , intermodal distortion , intermodal dispersion , and intermodal delay distortion . [ 1 ] [ 2 ]
In the ray optics analogy, modal dispersion in a step-index optical fiber may be compared to multipath propagation of a radio signal . Rays of light enter the fiber with different angles to the fiber axis , up to the fiber's acceptance angle . Rays that enter with a shallower angle travel by a more direct path, and arrive sooner than rays that enter at a steeper angle (which reflect many more times off the boundaries of the core as they travel the length of the fiber). The arrival of different components of the signal at different times distorts the shape. [ 3 ]
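In this ray picture, a standard back-of-the-envelope estimate of the delay spread per unit length of a step-index fiber (assuming meridional rays, core index n₁, and cladding index n₂) is

Δt/L ≈ (n₁/c)(n₁/n₂ − 1),

since the axial ray takes time n₁L/c while the steepest guided ray travels a path longer by the factor n₁/n₂.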
Modal dispersion limits the bandwidth of multimode fibers. For example, a typical step-index fiber with a 50 μm core would be limited to approximately 20 MHz for a one kilometer length, in other words, a bandwidth of 20 MHz·km. Modal dispersion may be considerably reduced, but never completely eliminated, by the use of a core having a graded refractive index profile. However, multimode graded-index fibers having bandwidths exceeding 3.5 GHz·km at 850 nm are now commonly manufactured for use in 10 Gbit/s data links.
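Taking the quoted bandwidth-distance product at face value, and assuming the simple inverse-length scaling that the product notation implies (real fibers can deviate from this because of mode mixing), the usable bandwidth of a link scales as

BW(L) ≈ (20 MHz·km)/L, e.g. BW(2 km) ≈ 10 MHz.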
Modal dispersion should not be confused with chromatic dispersion , a distortion that results due to the differences in propagation velocity of different wavelengths of light. Modal dispersion occurs even with an ideal, monochromatic light source.
A special case of modal dispersion is polarization mode dispersion (PMD), a fiber dispersion phenomenon usually associated with single-mode fibers. PMD results when two modes that normally travel at the same speed due to fiber core geometric and stress symmetry (for example, two orthogonal polarizations in a waveguide of circular or square cross-section), travel at different speeds due to random imperfections that break the symmetry.
In multimode optical fiber carrying many wavelengths, it is sometimes hard to identify the dispersed wavelength among all the wavelengths present if there is not yet a service degradation issue. One can compare the present optical power of each wavelength to the designed values and look for differences. After that, the optical fiber is tested end to end. If no loss is found, then most probably there is dispersion at that particular wavelength. Normally, engineers test the fiber section by section until they reach the affected section; all wavelengths are tested, and the affected wavelength produces a loss at the far end of the fiber. One can then calculate how much of the fiber is affected and replace that part with new fiber. Replacement of optical fiber is only required when there is intense dispersion and service is being affected; otherwise, various methods can be used to compensate for the dispersion.

Source: https://en.wikipedia.org/wiki/Modal_dispersion
The modal fallacy or modal scope fallacy is a type of formal fallacy that occurs in modal logic. It is the fallacy of placing a proposition in the wrong modal scope, [1] most commonly confusing the scope of what is necessarily true. A statement is considered necessarily true if and only if it is impossible for the statement to be untrue; that is, there is no situation that would cause the statement to be false. Some philosophers further argue that a necessarily true statement must be true in all possible worlds.
In modal logic, a proposition P can be necessarily true or false (denoted □P and □¬P, respectively), meaning that it is necessary that it is true or false; or it could be possibly true or false (denoted ◇P and ◇¬P), meaning that it is true or false, but it is not logically necessary that it is so: its truth or falseness is contingent. The modal fallacy occurs when there is a confusion of the distinction between the two.
A fallacy of necessity is an informal fallacy in the logic of a syllogism whereby a degree of unwarranted necessity is placed in the conclusion.
In modal logic, there is an important distinction between what is logically necessary to be true and what is true but not logically necessary to be so. One common form of the fallacy is replacing p → q with p → □q. In the first statement, q is true given p but is not logically necessary to be so.
Consider the argument: a) all bachelors are unmarried; b) John is a bachelor; therefore c) John cannot marry. The condition a) appears to be a tautology and therefore true. The condition b) is a statement of fact about John which makes him subject to a); that is, b) declares John a bachelor, and a) states that all bachelors are unmarried.
Because c) presumes b) will always be the case, it is a fallacy of necessity. John, of course, is always free to stop being a bachelor, simply by getting married; if he does so, b) is no longer true and thus not subject to the tautology a). In this case, c) has unwarranted necessity by assuming, incorrectly, that John cannot stop being a bachelor. Formally speaking, this type of argument equivocates between the de dicto necessity of a) and the de re necessity of c). The argument is only valid if both a) and c) are construed de re. This, however, would undermine the argument, as a) is only a tautology de dicto – indeed, interpreted de re, it is false. [2] Using the formal symbolism of modal logic, the de dicto expression □(Bx → ¬Mx) is a tautology, while the de re expression Bx → □¬Mx is false.
An example: Mickey Mouse first appeared in 1928; anything that first appeared in 1928 is over 35 years old; therefore, Mickey Mouse is necessarily over 35 years old. Why is this false?
The conclusion is false, since, even though Mickey Mouse is over 35 years old, there is no logical necessity for him to be. Even though it is certainly true in this world, a possible world can exist in which Mickey Mouse is not yet 35 years old. If instead of adding a stipulation of necessity, the argument just concluded that Mickey Mouse is 35 or older, it would be valid.
Norman Swartz gave the following example of how the modal fallacy can lead one to conclude that the future is already set, regardless of one's decisions; this is based on the "sea battle" example used by Aristotle to discuss the problem of future contingents in his On Interpretation : [ 3 ]
Two admirals, A and B, are preparing their navies for a sea battle tomorrow. The battle will be fought until one side is victorious. But the 'laws' of the excluded middle (no third truth-value) and of non-contradiction (not both truth-values), mandate that one of the propositions, 'A wins' and 'B wins', is true (always has been and ever will be) and the other is false (always has been and ever will be). Suppose 'A wins' is today true. Then whatever A does (or fails to do) today will make no difference; similarly, whatever B does (or fails to do) today will make no difference: the outcome is already settled. Or again, suppose 'A wins' is today false. Then no matter what A does today (or fails to do), it will make no difference; similarly, no matter what B does (or fails to do), it will make no difference: the outcome is already settled. Thus, if propositions bear their truth-values timelessly (or unchangingly and eternally), then planning, or as Aristotle put it 'taking care', is illusory in its efficacy. The future will be what it will be, irrespective of our planning, intentions, etc.
Suppose that the statement "A wins" is given by A and "B wins" is given by B. It is true here that exactly one of the statements "A wins" or "B wins" must be true; that is, exactly one of A and B is true. In logical syntax, this is equivalent to
A ∨ B (either A or B is true)
¬◇(A ∧ B) (it is not possible that A and B are both true at the same time)
The fallacy here occurs because one assumes that ◇A and ◇B imply □A and □B. Thus, one believes that, since one of the two outcomes is logically necessarily true, no action by either admiral can change the outcome.
Swartz also argued that the argument from free will suffers from the modal fallacy. [4]

Source: https://en.wikipedia.org/wiki/Modal_fallacy
Modal fictionalism is a term used in philosophy, and more specifically in the metaphysics of modality, to describe the position that modality can be analysed in terms of a fiction about possible worlds. The theory comes in two versions: strong and timid. Both positions were first expounded by Gideon Rosen beginning in 1990. [1]
Modal fictionalism is a philosophical perspective that centers on the assertion that possible worlds are fictional entities. This perspective seeks to explain our apparent commitment to possible worlds in a manner akin to our engagement with other fictional constructs, such as ideal gases or frictionless surfaces. One of the pioneering works in this field was presented by Rosen in 1990, wherein he and other scholars formulated modal fictionalism as a theory equating talk of possible worlds with discussions about paradigmatically fictional objects, such as Sherlock Holmes. For example, statements like "There is a (non-actual) possible world at which there are blue swans" are understood through an analogy with "There is a brilliant detective at 221b Baker Street," as proposed by Rosen. [ 2 ]
Modal fictionalism involves at least a partial account of how paradigmatically fictional claims are to be treated, asserting that these claims are, in a literal and strict sense, false. According to modal fictionalists, there are no merely possible worlds, situations, outcomes, or objects. In strict terms, there is no sculpture created on a particular morning, even though the potential for its creation existed. Similarly, when a coin flip results in heads, there is no outcome in which it lands tails, strictly speaking.
However, within the context of the modal fiction or the fiction of possible worlds, there exists a (merely possible) sculpture that could have been created that morning and an (unactualized) outcome where the coin lands tails. While discussions about merely possible worlds and objects are generally literally false, more elaborate discussions about what is true according to the fiction of possible worlds are considered literally true.
Some proponents of modal fictionalism, such as Hinckfuss (1993), suggest that discussions about possible worlds should be governed by implicit presuppositions known to be false. This approach ensures that statements in the language of possible worlds do not necessitate a belief in their actual existence but rather commit one to more economical propositions, such as "if there were possible worlds of a certain kind, then..." or "given the presupposition that there are possible worlds...". Alternatively, other accounts of how talk about possible worlds functions may be proposed. For instance, Nolt (1986) proposes treating typical "possibilistic discourse" as a form of make-believe, although the specific theory of make-believe is not explicitly defined. Notably, Stephen Yablo (Yablo 1996) employs Walton's theory of make-believe in his modal fictionalism, which he also refers to as figuralism.
One of the primary advantages of adopting a fictional approach to possible worlds is the ability to utilize the language of possible worlds without committing to their literal existence. This approach is particularly appealing when considering merely possible objects, such as blue swans or dragons, which are often characterized by their non-actual existence.
Central to modal fictionalism are biconditionals that establish connections between truths about necessity and possibility and the contents of the modal fiction. These biconditionals, exemplified by schemas like "Possibly P iff according to the fiction of possible worlds, P is true at some possible world" and "Necessarily P iff according to the fiction of possible worlds, P is true at all possible worlds," are crucial for understanding the relationship between modal claims and the modal fiction. While these biconditionals can inter-define necessity and possibility, their precise workings may vary among different modal fictionalists. [ 2 ]
In conclusion, modal fictionalism offers a unique perspective on the nature of possible worlds, allowing for the exploration of these concepts while avoiding the ontological commitment to their actual existence. The diversity of approaches within modal fictionalism highlights the rich philosophical discussions and debates surrounding this intriguing viewpoint. [ 3 ] [ 2 ]
According to strong fictionalism about possible worlds (another name for strong modal fictionalism), the following bi-conditionals are necessary and specify the truth-conditions for certain cases of modal claims: (1) it is possible that P iff, according to the fiction of possible worlds, P is true at some possible world; (2) it is necessary that P iff, according to the fiction of possible worlds, P is true at all possible worlds.
Recent supporters of this view added further specifications of these bi-conditionals to counter certain objections. In the case of claims of possibility, the revised bi-conditional is spelled out thus: (1.1) it is possible that P iff, at this universe, presently, the translation of P into the language of a fiction F holds according to F. [4]
According to a timid version of fictionalism about possible worlds, our talk of possible worlds can be properly understood as involving reference to a fiction, but the aforementioned bi-conditionals should not be taken as an analysis of certain cases of modality.
Modal fictionalism has encountered various objections and concerns on more conceptual and philosophical grounds. These concerns are not uniformly applicable to all forms of modal fictionalism and often target specific versions of the doctrine.
One significant concern relates to the artificial nature of fictions. Fictional stories are human creations, typically authored by individuals who exercise a significant degree of control over the content and truth within those narratives. When considering modal fictionalism, it becomes evident that not just any narrative about possible worlds can serve as the modal fiction, especially if it is meant to provide heuristic and explanatory advantages similar to realist theories of possible worlds. The worry here is that talk of possible worlds may not be as flexible as typical fictional narratives, as the choice of which story about possible worlds should be the modal fiction might not be entirely within our discretion.
Modal fictionalists can argue that the constraints of the fiction are necessary, just as specific constraints govern the creation of fictional stories about other topics. However, defining these constraints can be challenging, and determining why they are appropriate is no simple task. Even if such constraints are established, there may still be a degree of artificiality in the choice of details left undetermined by these constraints, though this is unlikely to be a fatal problem.
A more specific concern arises regarding the contingency of whether a modal fiction exists at all. If sentient beings had never existed, no stories about possible worlds would have been told. Even if modal fiction is viewed as a Platonic entity (such as a collection of propositions), it might not have been considered fiction if it had never been expressed by storytellers. This concern is particularly relevant when modal truth is thought to depend on the contents of the fiction, as the possibility of blue swans, for instance, should not be contingent upon whether stories have been told. Various responses to this concern have been proposed in the literature. [ 5 ]
Fictions, including modal fictions, often exhibit incompleteness by being silent on certain issues. For example, the Sherlock Holmes stories do not specify the exact population of India or the number of hairs on Dr. Watson's head. Similarly, the modal fiction might also exhibit incompleteness, leaving some propositions without determinate truth values within the fiction.
This incompleteness can pose challenges. For instance, there is the "incompleteness problem," where a fictionalist may remain silent on certain modal issues, not because they believe there is no answer, but because the fiction itself is silent on those issues. This can lead to difficulties in determining the truth or falsity of related modal claims. Different solutions have been proposed, including treating modal claims as indeterminate when the fiction is silent on corresponding questions about possible worlds.
Another concern is that a modal fiction must represent a vast amount of information about possible worlds, as there are infinitely many claims about possible worlds necessary to correspond to all modal claims. However, the finite resources of description can limit the extent to which these propositions can be explicitly stated. While generalizations about possible worlds can help, strong modal fictionalists aiming to reductively analyze modality in terms of the fiction face challenges in representing the implicit content of the fiction without relying on modal notions like implication. [ 6 ]
A critical aspect of modal fictionalism is the specification of the fiction of possible worlds to be used. Selecting one from many potential candidate stories and justifying this choice is essential, yet often overlooked. While some modal fictionalists offer justifications, many do not.
Timid modal fictionalism provides a straightforward answer to this question by relying on independently obtaining modal truths. Strong modal fictionalists, however, must ensure that the content of the fiction aligns with the modal claims they wish to make. Nonetheless, this doesn't help them determine the content of the fiction itself. Specifying this content without relying on modal notions like implication is a challenge faced by strong modal fictionalists.
Constraints on the choice of fiction may be drawn from various sources, such as conformity with pre-theoretic modal judgments, inclusion of literal truths about our actual world, and considerations of our imaginative practices when forming modal beliefs. Even with these constraints, there might still be multiple equally suitable fictions, raising questions about how to handle differing fictional choices and their implications for modal claims.
Modal fictionalism relies on the "According to PW..." operator as a central theoretical tool. This operator poses a challenge, as it appears to be a modal notion. For modal fictionalists interested in analyzing modality in terms of their fiction, this operator should not be analyzed in terms of standard modal devices or possible worlds. Whether it should be considered a primitive or analyzed further remains a matter of debate and is a concern for those seeking a reductive analysis of modality.
Some philosophers [ who? ] have argued that modal fictionalism may not provide all the benefits of standard possible worlds semantics for modal discourse. John Divers, in particular, has raised objections to this aspect of modal fictionalism, questioning whether it can fully capture the advantages of traditional possible worlds semantics. [ citation needed ]
Modal fictionalism has also faced additional objections and concerns, which are part of ongoing debates surrounding modal fictionalism and its compatibility with various philosophical positions. [3] [7]

Source: https://en.wikipedia.org/wiki/Modal_fictionalism
Modal logic is a kind of logic used to represent statements about necessity and possibility. In philosophy and related fields it is used as a tool for understanding concepts such as knowledge, obligation, and causation. For instance, in epistemic modal logic, the formula □P can be used to represent the statement that P is known. In deontic modal logic, that same formula can represent that P is a moral obligation. Modal logic considers the inferences that modal statements give rise to. For instance, most epistemic modal logics treat the formula □P → P as a tautology, representing the principle that only true statements can count as knowledge. However, this formula is not a tautology in deontic modal logic, since what ought to be true can be false.
Modal logics are formal systems that include unary operators such as ◇ and □, representing possibility and necessity respectively. For instance the modal formula ◇P can be read as "possibly P" while □P can be read as "necessarily P". In the standard relational semantics for modal logic, formulas are assigned truth values relative to a possible world. A formula's truth value at one possible world can depend on the truth values of other formulas at other accessible possible worlds. In particular, ◇P is true at a world if P is true at some accessible possible world, while □P is true at a world if P is true at every accessible possible world. A variety of proof systems exist which are sound and complete with respect to the semantics one gets by restricting the accessibility relation. For instance, the deontic modal logic D is sound and complete if one requires the accessibility relation to be serial.
While the intuition behind modal logic dates back to antiquity, the first modal axiomatic systems were developed by C. I. Lewis in 1912. The now-standard relational semantics emerged in the mid twentieth century from work by Arthur Prior , Jaakko Hintikka , and Saul Kripke . Recent developments include alternative topological semantics such as neighborhood semantics as well as applications of the relational semantics beyond its original philosophical motivation. [ 1 ] Such applications include game theory , [ 2 ] moral and legal theory , [ 2 ] web design , [ 2 ] multiverse-based set theory , [ 3 ] and social epistemology . [ 4 ]
Modal logic differs from other kinds of logic in that it uses modal operators such as □ and ◇. The former is conventionally read aloud as "necessarily", and can be used to represent notions such as moral or legal obligation, knowledge, and historical inevitability, among others. The latter is typically read as "possibly" and can be used to represent notions including permission, ability, and compatibility with evidence. While the well-formed formulas of modal logic include non-modal formulas such as P ∧ Q, the language also contains modal ones such as □(P ∧ Q), P ∧ □Q, □(◇P ∧ ◇Q), and so on.
Thus, the language L of basic modal propositional logic can be defined recursively as follows: 1. every propositional variable P is a formula; 2. if φ is a formula, so is ¬φ; 3. if φ and ψ are formulas, so is (φ ∧ ψ); 4. if φ is a formula, so is □φ; 5. if φ is a formula, so is ◇φ.
Modal operators can be added to other kinds of logic by introducing rules analogous to #4 and #5 above. Modal predicate logic is one widely used variant which includes formulas such as ∀x◇P(x). In systems of modal logic where □ and ◇ are duals, □φ can be taken as an abbreviation for ¬◇¬φ, thus eliminating the need for a separate syntactic rule to introduce it. However, separate syntactic rules are necessary in systems where the two operators are not interdefinable.
Common notational variants include symbols such as [K] and ⟨K⟩ in systems of modal logic used to represent knowledge, and [B] and ⟨B⟩ in those used to represent belief. These notations are particularly common in systems which use multiple modal operators simultaneously. For instance, a combined epistemic-deontic logic could use the formula [K]⟨D⟩P, read as "I know P is permitted". Systems of modal logic can include infinitely many modal operators distinguished by indices, i.e. □₁, □₂, □₃, and so on.
The standard semantics for modal logic is called the relational semantics. In this approach, the truth of a formula is determined relative to a point which is often called a possible world. For a formula that contains a modal operator, its truth value can depend on what is true at other accessible worlds. Thus, the relational semantics interprets formulas of modal logic using models defined as triples 𝔐 = ⟨W, R, V⟩, whose components are explained below. [5]
The set W is often called the universe. The binary relation R is called an accessibility relation, and it controls which worlds can "see" each other for the sake of determining what is true. For example, wRu means that the world u is accessible from world w. That is to say, the state of affairs known as u is a live possibility for w. Finally, the function V is known as a valuation function. It determines which atomic formulas are true at which worlds.
Then we recursively define the truth of a formula at a world w in a model 𝔐: 𝔐, w ⊨ P iff w ∈ V(P); 𝔐, w ⊨ ¬φ iff 𝔐, w ⊭ φ; 𝔐, w ⊨ (φ ∧ ψ) iff 𝔐, w ⊨ φ and 𝔐, w ⊨ ψ; 𝔐, w ⊨ □φ iff 𝔐, u ⊨ φ for every u such that wRu; 𝔐, w ⊨ ◇φ iff 𝔐, u ⊨ φ for some u such that wRu.
According to this semantics, a formula is necessary with respect to a world w if it holds at every world that is accessible from w. It is possible if it holds at some world that is accessible from w. Possibility thereby depends upon the accessibility relation R, which allows us to express the relative nature of possibility. For example, we might say that given our laws of physics it is not possible for humans to travel faster than the speed of light, but that given other circumstances it could have been possible to do so. Using the accessibility relation we can translate this scenario as follows: at all of the worlds accessible to our own world, it is not the case that humans can travel faster than the speed of light, but at one of these accessible worlds there is another world, accessible from those worlds but not accessible from our own, at which humans can travel faster than the speed of light.
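The truth clauses above translate almost directly into code. The following Python sketch is one way to evaluate formulas on a finite relational model; the tuple representation of formulas and all names are ad hoc choices for this example, not a standard library interface.

```python
from dataclasses import dataclass

@dataclass
class Model:
    worlds: set    # W
    access: set    # R, as a set of (w, u) pairs
    val: dict      # V: atom name -> set of worlds where the atom is true

def holds(m: Model, w, phi) -> bool:
    """Evaluate a formula, given as a nested tuple, at world w."""
    op = phi[0]
    if op == 'atom':
        return w in m.val.get(phi[1], set())
    if op == 'not':
        return not holds(m, w, phi[1])
    if op == 'and':
        return holds(m, w, phi[1]) and holds(m, w, phi[2])
    if op == 'box':    # true at w iff true at every world accessible from w
        return all(holds(m, u, phi[1]) for (v, u) in m.access if v == w)
    if op == 'dia':    # true at w iff true at some world accessible from w
        return any(holds(m, u, phi[1]) for (v, u) in m.access if v == w)
    raise ValueError(f"unknown operator: {op}")

# A one-world reflexive model: P -> ◇P holds, as noted below.
m = Model(worlds={'w'}, access={('w', 'w')}, val={'P': {'w'}})
print(holds(m, 'w', ('dia', ('atom', 'P'))))   # True
```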
The choice of accessibility relation alone can sometimes be sufficient to guarantee the truth or falsity of a formula. For instance, consider a model 𝔐 whose accessibility relation is reflexive. Because the relation is reflexive, we will have that 𝔐, w ⊨ P → ◇P for any w ∈ W, regardless of which valuation function is used. For this reason, modal logicians sometimes talk about frames, which are the portion of a relational model excluding the valuation function.
The different systems of modal logic are defined using frame conditions. A frame is called: reflexive if wRw for every w in W; symmetric if wRu implies uRw, for all w and u in W; transitive if wRu and uRv together imply wRv, for all w, u, v in W; serial if, for every w in W, there is some u in W such that wRu; and Euclidean if, for every u, v, and w, wRu and wRv imply uRv.
The logics that stem from these frame conditions are: K (no conditions), D (serial), T (reflexive), B (reflexive and symmetric), S4 (reflexive and transitive), and S5 (reflexive and Euclidean).
The Euclidean property along with reflexivity yields symmetry and transitivity. (The Euclidean property can be obtained, as well, from symmetry and transitivity.) Hence if the accessibility relation R is reflexive and Euclidean, R is provably symmetric and transitive as well. Hence for models of S5, R is an equivalence relation , because R is reflexive, symmetric and transitive.
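A short sketch of the derivation (where wRu abbreviates "u is accessible from w", and the Euclidean property is: wRu and wRv imply uRv): for symmetry, suppose wRu; reflexivity gives wRw, and applying the Euclidean property to wRu and wRw yields uRw. For transitivity, suppose wRu and uRv; symmetry gives uRw, and applying the Euclidean property at u to uRw and uRv yields wRv.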
We can prove that these frames produce the same set of valid sentences as do the frames where all worlds can see all other worlds of W (i.e., where R is a "total" relation). This gives the corresponding modal graph which is totally complete (i.e., no more edges (relations) can be added). For example, in any modal logic based on frame conditions: w ⊨ ◇P if and only if, for some element u of W, u ⊨ P and wRu.
If we consider frames based on the total relation we can just say that w ⊨ ◇P if and only if, for some element u of W, u ⊨ P. We can drop the accessibility clause from the latter stipulation because in such total frames it is trivially true of all w and u that wRu. But this does not have to be the case in all S5 frames, which can still consist of multiple parts that are fully connected among themselves but still disconnected from each other.
All of these logical systems can also be defined axiomatically, as is shown in the next section. For example, in S5, the axioms P → □◇P, □P → □□P and □P → P (corresponding to symmetry, transitivity and reflexivity, respectively) hold, whereas at least one of these axioms does not hold in each of the other, weaker logics.
Modal logic has also been interpreted using topological structures. For instance, the Interior Semantics interprets formulas of modal logic as follows.
A topological model is a tuple X = ⟨X, τ, V⟩ where ⟨X, τ⟩ is a topological space and V is a valuation function which maps each atomic formula to some subset of X. The basic interior semantics interprets formulas of modal logic as follows: X, x ⊨ P iff x ∈ V(P); X, x ⊨ ¬φ iff X, x ⊭ φ; X, x ⊨ φ ∧ ψ iff X, x ⊨ φ and X, x ⊨ ψ; and X, x ⊨ □φ iff there is an open set U ∈ τ with x ∈ U such that X, y ⊨ φ for all y ∈ U (that is, □ is interpreted as the topological interior operator).
Topological approaches subsume relational ones, allowing non-normal modal logics . The extra structure they provide also allows a transparent way of modeling certain concepts such as the evidence or justification one has for one's beliefs. Topological semantics is widely used in recent work in formal epistemology and has antecedents in earlier work such as David Lewis and Angelika Kratzer 's logics for counterfactuals .
The first formalizations of modal logic were axiomatic . Numerous variations with very different properties have been proposed since C. I. Lewis began working in the area in 1912. Hughes and Cresswell (1996), for example, describe 42 normal and 25 non-normal modal logics. Zeman (1973) describes some systems Hughes and Cresswell omit.
Modern treatments of modal logic begin by augmenting the propositional calculus with two unary operations, one denoting "necessity" and the other "possibility". The notation of C. I. Lewis, much employed since, denotes "necessarily p" by a prefixed "box" (□p) whose scope is established by parentheses. Likewise, a prefixed "diamond" (◇p) denotes "possibly p". Similar to the quantifiers in first-order logic, "necessarily p" (□p) does not assume the range of quantification (the set of accessible possible worlds in Kripke semantics) to be non-empty, whereas "possibly p" (◇p) often implicitly assumes ◇⊤ (viz. the set of accessible possible worlds is non-empty). Regardless of notation, each of these operators is definable in terms of the other in classical modal logic: □p (necessarily p) is equivalent to ¬◇¬p (not possibly not-p), and ◇p (possibly p) is equivalent to ¬□¬p (not necessarily not-p).
Hence □ and ◇ form a dual pair of operators.
In many modal logics, the necessity and possibility operators satisfy the following analogues of de Morgan's laws from Boolean algebra: "it is not necessary that p" is logically equivalent to "it is possible that not p" (¬□p ↔ ◇¬p), and "it is not possible that p" is logically equivalent to "it is necessary that not p" (¬◇p ↔ □¬p).
Precisely what axioms and rules must be added to the propositional calculus to create a usable system of modal logic is a matter of philosophical opinion, often driven by the theorems one wishes to prove; or, in computer science, it is a matter of what sort of computational or deductive system one wishes to model. Many modal logics, known collectively as normal modal logics, include the following rule and axiom: N, Necessitation Rule: if p is a theorem, then □p is likewise a theorem; K, Distribution Axiom: □(p → q) → (□p → □q).
The weakest normal modal logic , named " K " in honor of Saul Kripke , is simply the propositional calculus augmented by □, the rule N , and the axiom K . K is weak in that it fails to determine whether a proposition can be necessary but only contingently necessary. That is, it is not a theorem of K that if □ p is true then □□ p is true, i.e., that necessary truths are "necessarily necessary". If such perplexities are deemed forced and artificial, this defect of K is not a great one. In any case, different answers to such questions yield different systems of modal logic.
Adding axioms to K gives rise to other well-known modal systems. One cannot prove in K that if "p is necessary" then p is true. The axiom T remedies this defect: T, Reflexivity Axiom: □p → p (if p is necessary, then p is the case).
T holds in most but not all modal logics. Zeman (1973) describes a few exceptions, such as S1 0 .
Other well-known elementary axioms are: 4: □p → □□p; B: p → □◇p; D: □p → ◇p; 5: ◇p → □◇p.
These yield the following systems: K := K + N; T := K + T; S4 := T + 4; B := T + B; D := K + D; S5 := S4 + 5 (equivalently, T + 5).
K through S5 form a nested hierarchy of systems, making up the core of normal modal logic. But specific rules or sets of rules may be appropriate for specific systems. For example, in deontic logic, □p → ◇p (if it ought to be that p, then it is permitted that p) seems appropriate, but we should probably not include p → □◇p. In fact, to do so is to commit the naturalistic fallacy (i.e. to state that what is natural is also good, by saying that if p is the case, p ought to be permitted).
The commonly employed system S5 simply makes all modal truths necessary. For example, if p is possible, then it is "necessary" that p is possible. Also, if p is necessary, then it is necessary that p is necessary. Other systems of modal logic have been formulated, in part because S5 does not describe every kind of modality of interest.
Sequent calculi and systems of natural deduction have been developed for several modal logics, but it has proven hard to combine generality with other features expected of good structural proof theories , such as purity (the proof theory does not introduce extra-logical notions such as labels) and analyticity (the logical rules support a clean notion of analytic proof ). More complex calculi have been applied to modal logic to achieve generality. [ citation needed ]
Analytic tableaux provide the most popular decision method for modal logics. [ 6 ]
Modalities of necessity and possibility are called alethic modalities. They are also sometimes called special modalities, from the Latin species . Modal logic was first developed to deal with these concepts, and only afterward was extended to others. For this reason, or perhaps for their familiarity and simplicity, necessity and possibility are often casually treated as the subject matter of modal logic. Moreover, it is easier to make sense of relativizing necessity, e.g. to legal, physical, nomological , epistemic , and so on, than it is to make sense of relativizing other notions.
In classical modal logic, a proposition is said to be possible if it is not necessarily false (regardless of whether it is actually true or false); necessary if it is not possibly false (i.e. true and necessarily true); contingent if it is not necessarily false and not necessarily true (i.e. possible but not necessarily true); and impossible if it is not possibly true (i.e. false and necessarily false).
In classical modal logic, therefore, the notion of either possibility or necessity may be taken to be basic, where these other notions are defined in terms of it in the manner of De Morgan duality . Intuitionistic modal logic treats possibility and necessity as not perfectly symmetric.
For example, suppose that while walking to the convenience store we pass Friedrich's house, and observe that the lights are off. On the way back, we observe that they have been turned on. "Somebody or something turned the lights on" is necessary. "Friedrich turned the lights on", "Friedrich's roommate Max turned the lights on", and "a burglar broke into Friedrich's house and turned the lights on" are contingent. All of the above statements are possible. It is impossible that Socrates (who has been dead for over two thousand years) turned the lights on.
(Of course, this analogy does not apply alethic modality in a truly rigorous fashion; for it to do so, it would have to axiomatically make such statements as "human beings cannot rise from the dead", "Socrates was a human being and not an immortal vampire", and "we did not take hallucinogenic drugs which caused us to falsely believe the lights were on", ad infinitum . Absolute certainty of truth or falsehood exists only in the sense of logically constructed abstract concepts such as "it is impossible to draw a triangle with four sides" and "all bachelors are unmarried".)
For those having difficulty with the concept of something being possible but not true, the meaning of these terms may be made more comprehensible by thinking of multiple "possible worlds" (in the sense of Leibniz ) or "alternate universes"; something "necessary" is true in all possible worlds, something "possible" is true in at least one possible world.
Something is physically, or nomically, possible if it is permitted by the laws of physics . [ citation needed ] For example, current theory is thought to allow for there to be an atom with an atomic number of 126, [ 7 ] even if there are no such atoms in existence. In contrast, while it is logically possible to accelerate beyond the speed of light , [ 8 ] modern science stipulates that it is not physically possible for material particles or information. [ 9 ] Physical possibility does not coincide with empirical possibility. [ 10 ]
Philosophers [ who? ] debate if objects have properties independent of those dictated by scientific laws. For example, it might be metaphysically necessary, as some who advocate physicalism have thought, that all thinking beings have bodies [ 11 ] and can experience the passage of time . Saul Kripke has argued that every person necessarily has the parents they do have: anyone with different parents would not be the same person. [ 12 ]
Metaphysical possibility has been thought to be more restricting than bare logical possibility [ 13 ] (i.e., fewer things are metaphysically possible than are logically possible). However, its exact relation (if any) to logical possibility or to physical possibility is a matter of dispute. Philosophers [ who? ] also disagree over whether metaphysical truths are necessary merely "by definition", or whether they reflect some underlying deep facts about the world, or something else entirely.
Epistemic modalities (from the Greek episteme , knowledge), deal with the certainty of sentences. The □ operator is translated as "x is certain that…", and the ◇ operator is translated as "For all x knows, it may be true that…" In ordinary speech both metaphysical and epistemic modalities are often expressed in similar words; the following contrasts may help:
A person, Jones, might reasonably say both : (1) "No, it is not possible that Bigfoot exists; I am quite certain of that"; and , (2) "Sure, it's possible that Bigfoots could exist". What Jones means by (1) is that, given all the available information, there is no question remaining as to whether Bigfoot exists. This is an epistemic claim. By (2) he makes the metaphysical claim that it is possible for Bigfoot to exist, even though he does not : there is no physical or biological reason that large, featherless, bipedal creatures with thick hair could not exist in the forests of North America (regardless of whether or not they do). Similarly, "it is possible for the person reading this sentence to be fourteen feet tall and named Chad" is metaphysically true (such a person would not somehow be prevented from doing so on account of their height and name), but not alethically true unless you match that description, and not epistemically true if it is known that fourteen-foot-tall human beings have never existed.
From the other direction, Jones might say, (3) "It is possible that Goldbach's conjecture is true; but also possible that it is false", and also (4) "if it is true, then it is necessarily true, and not possibly false". Here Jones means that it is epistemically possible that it is true or false, for all he knows (Goldbach's conjecture has not been proven either true or false), but if there is a proof (heretofore undiscovered), then it would show that it is not logically possible for Goldbach's conjecture to be false—there could be no set of numbers that violated it. Logical possibility is a form of alethic possibility; (4) makes a claim about whether it is possible (i.e., logically speaking) for a mathematical truth to have been false, but (3) only makes a claim about whether it is possible, for all Jones knows (i.e., speaking of certitude), that the mathematical claim is specifically either true or false, and so again Jones does not contradict himself. It is worthwhile to observe that Jones is not necessarily correct: it is possible (epistemically) that Goldbach's conjecture is both true and unprovable.
Epistemic possibilities also bear on the actual world in a way that metaphysical possibilities do not. Metaphysical possibilities bear on ways the world might have been, but epistemic possibilities bear on the way the world may be (for all we know). Suppose, for example, that I want to know whether or not to take an umbrella before I leave. If you tell me "it is possible that it is raining outside" – in the sense of epistemic possibility – then that would weigh on whether or not I take the umbrella. But if you just tell me that "it is possible for it to rain outside" – in the sense of metaphysical possibility – then I am no better off for this bit of modal enlightenment.
Some features of epistemic modal logic are in debate. For example, if x knows that p, does x know that it knows that p? That is to say, should □P → □□P be an axiom in these systems? While the answer to this question is unclear, [14] there is at least one axiom that is generally included in epistemic modal logic, because it is minimally true of all normal modal logics (see the section on axiomatic systems): the distribution axiom K, □(p → q) → (□p → □q).
It has been questioned whether the epistemic and alethic modalities should be considered distinct from each other. The criticism states that there is no real difference between "the truth in the world" (alethic) and "the truth in an individual's mind" (epistemic). [ 15 ] An investigation has not found a single language in which alethic and epistemic modalities are formally distinguished, as by the means of a grammatical mood . [ 16 ]
Temporal logic is an approach to the semantics of expressions with tense , that is, expressions with qualifications of when. Some expressions, such as '2 + 2 = 4', are true at all times, while tensed expressions such as 'John is happy' are only true sometimes.
In temporal logic, tense constructions are treated in terms of modalities, where a standard method for formalizing talk of time is to use two pairs of operators, one for the past and one for the future (P will just mean 'it is presently the case that P'). For example: FP: it will sometimes be the case that P; GP: it will always be the case that P; PP: it was sometime the case that P; HP: it has always been the case that P.
There are then at least three modal logics that we can develop. For example, we can stipulate that ◇P = P is the case at some time t (past, present, or future), and □P = P is the case at every time t. Or we can trade these operators to deal only with the future: for example, ◇P = FP ∨ P (P is or will be the case), □P = GP ∧ P (P is and always will be the case); or, dealing only with the past, ◇P = PP ∨ P, □P = HP ∧ P.
The operators F and G may seem initially foreign, but they create normal modal systems. FP is the same as ¬G¬P. We can combine the above operators to form complex statements. For example, PP → □PP says (effectively), Everything that is past and true is necessary.
It seems reasonable to say that possibly it will rain tomorrow, and possibly it will not; on the other hand, since we cannot change the past, if it is true that it rained yesterday, it cannot be true that it may not have rained yesterday. It seems the past is "fixed", or necessary, in a way the future is not. This is sometimes referred to as accidental necessity . But if the past is "fixed", and everything that is in the future will eventually be in the past, then it seems plausible to say that future events are necessary too.
Similarly, the problem of future contingents considers the semantics of assertions about the future: is either of the propositions 'There will be a sea battle tomorrow', or 'There will not be a sea battle tomorrow' now true? Considering this thesis led Aristotle to reject the principle of bivalence for assertions concerning the future.
Additional binary operators are also relevant to temporal logics (see Linear temporal logic ).
Versions of temporal logic can be used in computer science to model computer operations and prove theorems about them. In one version, ◇ P means "at a future time in the computation it is possible that the computer state will be such that P is true"; □ P means "at all future times in the computation P will be true". In another version, ◇ P means "at the immediate next state of the computation, P might be true"; □ P means "at the immediate next state of the computation, P will be true". These differ in the choice of Accessibility relation . (P always means "P is true at the current computer state".) These two examples involve nondeterministic or not-fully-understood computations; there are many other modal logics specialized to different types of program analysis. Each one naturally leads to slightly different axioms.
Likewise talk of morality, or of obligation and norms generally, seems to have a modal structure. The difference between "You must do this" and "You may do this" looks a lot like the difference between "This is necessary" and "This is possible". Such logics are called deontic , from the Greek for "duty".
Deontic logics commonly lack the axiom T, which semantically corresponds to the reflexivity of the accessibility relation in Kripke semantics: in symbols, □φ → φ. Interpreting □ as "it is obligatory that", T informally says that every obligation is true. For example, if it is obligatory not to kill others (i.e. killing is morally forbidden), then T implies that people actually do not kill others. The consequent is obviously false.
Instead, using Kripke semantics , we say that though our own world does not realize all obligations, the worlds accessible to it do (i.e., T holds at these worlds). These worlds are called idealized worlds. P is obligatory with respect to our own world if at all idealized worlds accessible to our world, P holds. Though this was one of the first interpretations of the formal semantics, it has recently come under criticism. [ 17 ]
One other principle that is often (at least traditionally) accepted as a deontic principle is D, □φ → ◇φ, which corresponds to the seriality (or extendability or unboundedness) of the accessibility relation. It is an embodiment of the Kantian idea that "ought implies can". (Clearly the "can" can be interpreted in various senses, e.g. in a moral or alethic sense.)
When we try to formalize ethics with standard modal logic, we run into some problems. Suppose that we have a proposition K: you have stolen some money, and another, Q: you have stolen a small amount of money. Now suppose we want to express the thought that "if you have stolen some money, it ought to be a small amount of money". There are two likely candidates: (1) K → □Q, and (2) □(K → Q).
But (1) and K together entail □Q, which says that it ought to be the case that you have stolen a small amount of money. This surely is not right, because you ought not to have stolen anything at all. And (2) does not work either: if the right representation of "if you have stolen some money it ought to be a small amount" is (2), then the right representation of (3) "if you have stolen some money then it ought to be a large amount" is □(K → (K ∧ ¬Q)). Now suppose (as seems reasonable) that you ought not to steal anything, or □¬K. But then we can deduce □(K → (K ∧ ¬Q)) via □¬K → □(K → (K ∧ ¬K)) and □((K ∧ ¬K) → (K ∧ ¬Q)) (the contrapositive of Q → K); so sentence (3) follows from our hypothesis (of course the same logic shows sentence (2)). But that cannot be right, and is not right when we use natural language. Telling someone they should not steal certainly does not imply that they should steal large amounts of money if they do engage in theft. [18]
Doxastic logic concerns the logic of belief (of some set of agents). The term doxastic is derived from the ancient Greek doxa which means "belief". Typically, a doxastic logic uses □, often written "B", to mean "It is believed that", or when relativized to a particular agent s, "It is believed by s that".
Modal logics may be extended to fuzzy form with calculi in the class of fuzzy Kripke models. [ 19 ]
Modal logics may also be enhanced via base-extension semantics for the classical propositional systems. In this case, the validity of a formula can be shown by an inductive definition generated by provability in a ‘base’ of atomic rules. [ 20 ]
Intuitionistic modal logics are used in different areas of application, and they have often risen from different sources. The areas include the foundations of mathematics, computer science and philosophy. In these approaches, often modalities are added to intuitionistic logic to create new intuitionistic connectives and to simulate the monadic elements of intuitionistic first order logic . [ 21 ]
In the most common interpretation of modal logic, one considers " logically possible worlds". If a statement is true in all possible worlds , then it is a necessary truth. If a statement happens to be true in our world, but is not true in all possible worlds, then it is a contingent truth. A statement that is true in some possible world (not necessarily our own) is called a possible truth.
Under this "possible worlds idiom", to maintain that Bigfoot's existence is possible but not actual, one says, "There is some possible world in which Bigfoot exists; but in the actual world, Bigfoot does not exist". However, it is unclear what this claim commits us to. Are we really alleging the existence of possible worlds, every bit as real as our actual world, just not actual? Saul Kripke believes that 'possible world' is something of a misnomer – that the term 'possible world' is just a useful way of visualizing the concept of possibility. [ 22 ] For him, the sentences "you could have rolled a 4 instead of a 6" and "there is a possible world where you rolled a 4, but you rolled a 6 in the actual world" are not significantly different statements, and neither commit us to the existence of a possible world. [ 23 ] David Lewis , on the other hand, made himself notorious by biting the bullet, asserting that all merely possible worlds are as real as our own, and that what distinguishes our world as actual is simply that it is indeed our world – this world. [ 24 ] That position is a major tenet of " modal realism ". Some philosophers decline to endorse any version of modal realism, considering it ontologically extravagant, and prefer to seek various ways to paraphrase away these ontological commitments. Robert Adams holds that 'possible worlds' are better thought of as 'world-stories', or consistent sets of propositions. Thus, it is possible that you rolled a 4 if such a state of affairs can be described coherently. [ 25 ]
Computer scientists will generally pick a highly specific interpretation of the modal operators specialized to the particular sort of computation being analysed. In place of "all worlds", you may have "all possible next states of the computer", or "all possible future states of the computer".
Modal logics have begun to be used in areas of the humanities such as literature, poetry, art and history. [ 26 ] [ 27 ] In the philosophy of religion , modal logics are commonly used in arguments for the existence of God . [ 28 ]
The basic ideas of modal logic date back to antiquity. Aristotle developed a modal syllogistic in Book I of his Prior Analytics (ch. 8–22), which Theophrastus attempted to improve. [ 29 ] There are also passages in Aristotle's work, such as the famous sea-battle argument in De Interpretatione §9, that are now seen as anticipations of the connection of modal logic with potentiality and time. In the Hellenistic period, the logicians Diodorus Cronus , Philo the Dialectician and the Stoic Chrysippus each developed a modal system that accounted for the interdefinability of possibility and necessity, accepted axiom T (see below ), and combined elements of modal logic and temporal logic in attempts to solve the notorious Master Argument . [ 30 ] The earliest formal system of modal logic was developed by Avicenna , who ultimately developed a theory of " temporally modal" syllogistic. [ 31 ] Modal logic as a self-aware subject owes much to the writings of the Scholastics , in particular William of Ockham and John Duns Scotus , who reasoned informally in a modal manner, mainly to analyze statements about essence and accident .
In the 19th century, Hugh MacColl made innovative contributions to modal logic, but did not find much acknowledgment. [ 32 ] C. I. Lewis founded modern modal logic in a series of scholarly articles beginning in 1912 with "Implication and the Algebra of Logic". [ 33 ] [ 34 ] Lewis was led to invent modal logic, and specifically strict implication , on the grounds that classical logic grants paradoxes of material implication such as the principle that a falsehood implies any proposition . [ 35 ] This work culminated in his 1932 book Symbolic Logic (with C. H. Langford ), [ 36 ] which introduced the five systems S1 through S5 .
After Lewis, modal logic received little attention for several decades. Nicholas Rescher has argued that this was because Bertrand Russell rejected it. [ 37 ] However, Jan Dejnozka has argued against this view, stating that a modal system which Dejnozka calls "MDL" is described in Russell's works, although Russell did believe the concept of modality to "come from confusing propositions with propositional functions ", as he wrote in The Analysis of Matter . [ 38 ]
Ruth C. Barcan (later Ruth Barcan Marcus ) developed the first axiomatic systems of quantified modal logic — first and second order extensions of Lewis' S2 , S4 , and S5 . [ 39 ] [ 40 ] [ 41 ] Arthur Norman Prior warned her to prepare well in the debates concerning quantified modal logic with Willard Van Orman Quine , because of bias against modal logic. [ 42 ]
The contemporary era in modal semantics began in 1959, when Saul Kripke (then only an 18-year-old Harvard University undergraduate) introduced the now-standard Kripke semantics for modal logics. These are commonly referred to as "possible worlds" semantics. Kripke and A. N. Prior had previously corresponded at some length. Kripke semantics is basically simple, but proofs are eased using semantic tableaux or analytic tableaux , as explained by E. W. Beth .
A. N. Prior created modern temporal logic , closely related to modal logic, in 1957 by adding modal operators [F] and [P] meaning "eventually" and "previously". Vaughan Pratt introduced dynamic logic in 1976. In 1977, Amir Pnueli proposed using temporal logic to formalise the behaviour of continually operating concurrent programs . Flavors of temporal logic include propositional dynamic logic (PDL), (propositional) linear temporal logic (LTL), computation tree logic (CTL), Hennessy–Milner logic , and T .
The mathematical structure of modal logic, namely Boolean algebras augmented with unary operations (often called modal algebras ), began to emerge with J. C. C. McKinsey 's 1941 proof that S2 and S4 are decidable, [ 43 ] and reached full flower in the work of Alfred Tarski and his student Bjarni Jónsson (Jónsson and Tarski 1951–52). This work revealed that S4 and S5 are models of interior algebra , a proper extension of Boolean algebra originally designed to capture the properties of the interior and closure operators of topology . Texts on modal logic typically do little more than mention its connections with the study of Boolean algebras and topology . For a thorough survey of the history of formal modal logic and of the associated mathematics, see Robert Goldblatt (2006). [ 44 ] | https://en.wikipedia.org/wiki/Modal_logic |
A modal share (also called mode split , mode-share , or modal split ) is the percentage of travelers using a particular type of transportation, or the number of trips made using that type. [ 1 ] In freight transportation , this may be measured in mass .
Modal share is an important component in developing sustainable transport within a city or region. In recent years, many cities have set modal share targets for balanced and sustainable transport modes, particularly 30% of non-motorized ( cycling and walking ) and 30% of public transport . These goals reflect a desire for a modal shift , or a change between modes, and usually encompass an increase in the proportion of trips made using sustainable modes. [ 2 ]
Modal share data is usually obtained by travel surveys, which are often conducted by local governments, using different methodologies. Sampling and interviewing techniques, definitions, the extent of geographical areas and other methodological differences can influence comparability. Most typical surveys refer to the main mode of transport used during trips to work. [ 3 ] Surveys covering entire metropolitan areas are preferred over city proper surveys which typically cover only the denser inner city.
The following tables present the modal split of journeys to work. Note that it is better to use a measure of all trips on a typical weekday, but journey-to-work data is more readily available. It would also be beneficial to disaggregate private motor vehicle figures into car drivers, car passengers and motorbikes (especially relevant for Asian cities).
Notes: European data is based on the Urban Audit [ 103 ]
The Charter of Brussels , signed by 36 cities including Brussels, Ghent, Milan, Munich, Seville, Edinburgh, Toulouse, Bordeaux, Gdansk, and Timișoara, commits the signatories to achieve at least 15% of bicycling modal share by 2020, and calls upon European institutions to do likewise. [ 104 ] The cycling modal share is strongly associated with the size of local cycling infrastructure . [ 105 ]
The Canadian city of Hamilton adopted a similar modal share target plan in 2005. [ 106 ]
The modal share differs considerably from city to city in the developing world. [ 107 ] [ 108 ] [ 109 ]
According to UNECE, the global on-road vehicle fleet is projected to double by 2050 (from 1.2 billion to 2.5 billion, [ 110 ] see introduction), with most future car purchases taking place in developing countries. Some experts even suggest that the number of vehicles in developing countries will increase four- or five-fold by 2050 (compared to current car use levels), and that the majority of these will be second-hand . [ 13 ] [ 111 ]
Legislation can discourage car ownership through, for example, taxation and conditions on new car purchases. This could help in achieving a modal shift. [ 112 ]
In formal semantics and pragmatics , modal subordination is the phenomenon whereby a modal expression is interpreted relative to another modal expression to which it is not syntactically subordinate. [ 1 ] [ 2 ] For instance, the following example does not assert that the birds will in fact be hungry, but rather that hungry birds would be a consequence of Joan forgetting to fill the birdfeeder. This interpretation was unexpected in early theories of the syntax-semantics interface since the content concerning the birds' hunger occurs in a separate sentence from the if -clause. [ 1 ]
Instances of modal subordination have been found in a variety of languages with a variety of other modal operators including epistemic modal auxiliaries, deontic modal auxiliaries, negation , habituals , and evidentials . Some English examples are given below.
The phenomenon of modal subordination was discovered and first analyzed by Craige Roberts . In her 1987 dissertation, she argued against an analysis where the modally subordinated content is inserted under the semantic scope of the earlier modal expression, instead proposing that the phenomenon be understood in terms of implicit domain restriction. On this account, the compositional semantics does not fully determine the domain of quantification for modal expressions, and modal subordination is the result of the domain for the later expression being identified with that of the earlier one. [ 7 ] [ 8 ] Subsequent work has argued that Roberts' original account is too unconstrained and thus wrongly predicts that modal subordination should be possible in cases where the data suggest it is impossible. Many recent analyses treat modal subordination as a form of anaphora involving propositional or temporal discourse referents. Such accounts are often couched in variants of dynamic semantics such as DRT and SDRT . [ 9 ]
| https://en.wikipedia.org/wiki/Modal_subordination
Modbus or MODBUS is a client/server data communications protocol in the application layer . [ 1 ] It was originally designed for use with programmable logic controllers (PLCs), [ 2 ] but has become a de facto standard communication protocol for communication between industrial electronic devices in a wide range of buses and networks. [ 3 ] [ 1 ]
Modbus is popular in industrial environments because it is openly published and royalty-free . It was developed for industrial applications, is relatively easy to deploy and maintain compared to other standards, and places few restrictions on the format of the data to be transmitted.
The Modbus protocol uses serial communication lines , Ethernet , or the Internet protocol suite as a transport layer. [ 1 ] Modbus supports communication to and from multiple devices connected to the same cable or Ethernet network. For example, there can be a device that measures temperature and another device to measure humidity connected to the same cable, both communicating measurements to the same computer , via Modbus.
Modbus is often used to connect a plant/system supervisory computer with a remote terminal unit (RTU) in supervisory control and data acquisition ( SCADA ) systems. Many of the data types are named from industrial control of factory devices, such as ladder logic because of its use in driving relays: a single-bit physical output is called a coil , and a single-bit physical input is called a discrete input or a contact .
It was originally published by Modicon in 1979. The company was acquired by Schneider Electric in 1997. In 2004, Schneider Electric transferred the rights to the Modbus Organization , [ 4 ] a trade association of users and suppliers of Modbus-compliant devices that advocates for the continued use of the technology. [ 5 ]
Modbus standards or buses include: [ 1 ]
To support Modbus communication on a network, many modems and gateways incorporate proprietary designs (refer to the diagram: Architecture of a network for Modbus communication ). Implementations may deploy either wireline or wireless communication, such as in the ISM radio band , and even Short Message Service (SMS) or General Packet Radio Service (GPRS).
Modbus defines a client , which is an entity that initiates a transaction to request a specific task from its request receiver . [ 6 ] The client's "request receiver", with which the client has initiated the transaction, is then called the server . [ 6 ] For example, when a microcontroller unit (MCU) connects to a sensor to read its data by Modbus on a wired network, e.g. an RS485 bus, the MCU in this context is the client and the sensor is the server. In former terminology, the client was named master and the server named slave .
Modbus defines a protocol data unit (PDU) independently of its lower-layer protocols in its protocol stack. Mapping the MODBUS protocol onto specific buses or networks requires some additional fields, defined as the application data unit (ADU). The ADU is formed by a client inside a Modbus network when the client initiates a transaction. Contents are: [ 7 ]
The ADU is officially called a Modbus frame by the Modbus Organization, [ 7 ] although frame is used as the data unit in the data-link layer in the OSI and TCP/IP model (while Modbus is an application layer protocol).
The maximum PDU size is 253 bytes. The maximum ADU size on an RS232/RS485 network is 256 bytes, and with TCP it is 260 bytes. [ 8 ]
For data encoding, Modbus uses a big-endian representation for addresses and data fields. Thus, for a 16-bit value, the most significant byte is sent first. For example, when a 16-bit register has value 0x1234, byte 0x12 is sent before byte 0x34. [ 8 ]
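As a brief illustration (not taken from the Modbus specification itself, which is language-neutral), the big-endian rule can be demonstrated with Python's standard struct module:

```python
import struct

# Modbus sends 16-bit values most significant byte first (big-endian).
# ">H" selects big-endian unsigned 16-bit in Python's struct module.
payload = struct.pack(">H", 0x1234)
assert payload == b"\x12\x34"   # 0x12 goes on the wire before 0x34

# Decoding a received register value reverses the operation.
(value,) = struct.unpack(">H", b"\x12\x34")
assert value == 0x1234
```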
The function code field is 1 byte, giving the code of the function to execute. Function codes are integer values ranging from 1 to 255; the range from 128 to 255 is reserved for exception responses.
The data field of the PDU has addresses from 0 to 65535 (not to be confused with the address of the Additional address field of the ADU). [ 9 ] The data field of the PDU can be empty, and then has a size of 0. In this case, the server needs no additional information and the function code alone defines the action to be executed. If there is no error during the execution process, the data field of the ADU response from server to client will include the data requested. If there is any error, the server will respond with an exception code. [ 6 ]
A Modbus transaction between client and server includes: [ 6 ] [ 10 ]
Based on that, Modbus defines 3 PDU types: [ 8 ]
Modbus defines its data model based on a series of tables of four primary types: [ 11 ]
For each of the primary tables, the protocol allows individual selection of 65536 data items, and the operations of read or write of those items are designed to span multiple consecutive data items up to a data size limit which is dependent on the transaction function code. [ 11 ]
Modbus defines three types of function codes: Public, User-Defined and Reserved. [ 13 ]
Note: Some sources use terminology that differs from the standard; for example Force Single Coil instead of Write Single Coil . [ 14 ]
Function code 01 (read coils) allows reading the state of 1 to 2000 coils of a remote device. mb_req_pdu (request PDU) will then have 2 bytes to indicate the address of the first coil to read (from 0x0000 to 0xFFFF), and 2 bytes to indicate the number of coils to read. mb_req_pdu addresses coils from index 0, i.e. the first coil has address 0x0. On a successful execution, mb_rsp_pdu will return one byte to note the function code (0x01), followed by one byte to indicate the number of data bytes it is returning (n), which will be the number of coils requested by mb_req_pdu, divided by 8 bits per byte, and rounded up. The remainder of the response will be the specified number (n) of data bytes. [ 15 ] That is, the mb_req_pdu and mb_rsp_pdu of function code 01 will take the following form: [ 15 ]
For instance, the mb_req_pdu and mb_rsp_pdu to read the status of coils 20–38 will be: [ 16 ]
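The exchange can be sketched in Python as follows. The request fields (0-based start address 0x0013 and quantity 0x0013 for coils 20–38) follow the description above; the three response status bytes are illustrative sample values only, since any coil pattern is possible:

```python
import struct

def read_coils_request(first_coil: int, count: int) -> bytes:
    """Build a function-code-01 request PDU (coil addresses are 0-based)."""
    return struct.pack(">BHH", 0x01, first_coil, count)

# Coils 20-38 inclusive: 0-based start address 19 (0x0013), 19 coils (0x0013).
assert read_coils_request(19, 19) == bytes.fromhex("0100130013")

def parse_coils_response(rsp: bytes, count: int) -> list:
    """Unpack coil bits from a successful response PDU (LSB first per byte)."""
    assert rsp[0] == 0x01 and rsp[1] == (count + 7) // 8
    return [bool(rsp[2 + i // 8] >> (i % 8) & 1) for i in range(count)]

# 19 coils need ceil(19/8) = 3 data bytes; 0xCD 0x6B 0x05 are sample values.
states = parse_coils_response(bytes([0x01, 0x03, 0xCD, 0x6B, 0x05]), 19)
print(states[:8])   # status of coils 20 through 27
```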
User-Defined Function Codes are function codes defined by users. Modbus gives two ranges of values for user-defined function codes: 65 to 72 and 100 to 110. Consequently, user-defined function codes are not guaranteed to be unique. [ 13 ]
Reserved Function Codes are function codes used by some companies for legacy products and are not available for public use. [ 13 ]
When a client sends a request to a server, there can be four possible events for that request: [ 17 ]
An exception response message includes two additional fields compared to a normal response message: [ 17 ]
All Modbus exception codes: [ 18 ]
The Modbus standard also defines Modbus over Serial Line, a protocol over the data link layer of the OSI model that allows the Modbus application layer protocol to be communicated over a serial bus . [ 19 ] The Modbus Serial Line protocol is a master-slave protocol which supports one master and multiple slaves on the serial bus. [ 20 ] With the Modbus protocol on the application layer, a client/server model is used for the devices on the communication channel. With Modbus over Serial Line, the client 's role is implemented by the master , and the server 's role is implemented by the slave . [ 20 ] [ 21 ]
The organization's naming convention inverts the common usage of having multiple clients and only one server. To avoid this confusion, the RS-485 transport layer uses the terms "node" or "device" instead of "server", and the "client" is not a "node". [ 21 ]
The Modbus Organization uses "client-server" to describe Modbus communications, characterized by communication between client device(s), which initiate communication and make requests of server device(s), which process requests and return an appropriate response (or error message).
A serial bus for Modbus over Serial Line can have a maximum of 247 slaves communicating with one master. Those slaves have a unique address ranging from 1 to 247 (decimal value). [ 22 ] The master doesn't need to have an address. [ 22 ] The communication process is initiated by the master, as only it can initiate a Modbus transaction. A slave will never transmit any data or perform any action without a request from the master, and slaves cannot communicate with each other. [ 23 ]
In Modbus over Serial Line, the master initiates requests to the slaves in unicast or broadcast modes. In unicast mode , the master will initiate a request to a single slave with a specific address. Upon receiving and finishing the request, the slave will respond with a message to the master. [ 22 ] In this mode, a Modbus transaction includes two messages: one request from the master and one reply from the slave. Each slave must have a unique address (from 1 to 247) to be addressed independently for the communication. [ 22 ] In broadcast mode , the master can send a request to all the slaves, using the broadcast address 0, [ 22 ] which is the address reserved for broadcast exchanges (and not the master address). Slaves must accept broadcast exchanges but must not respond. [ 23 ] The mapping of PDU of Modbus to the serial bus of Modbus over Serial Line protocol results in Modbus Serial Line PDU. [ 22 ]
Modbus Serial Line PDU = Address + PDU + CRC (or LRC)
With PDU = Function code + data
On the physical layer , MODBUS over Serial Line performs its bit-level communication over RS485 or RS232 , with the TIA/EIA-485 Two-Wire interface as the most popular option. The RS485 Four-Wire interface is also used. TIA/EIA-232-E (RS232) can also be used, but is limited to point-to-point short-range communication. [ 20 ] MODBUS over Serial Line has two transmission modes, RTU and ASCII, which correspond to two versions of the protocol, known as Modbus RTU and Modbus ASCII . [ 24 ]
Modbus RTU (Remote Terminal Unit), which is the most common implementation available for Modbus, makes use of a compact, binary representation of the data for protocol communication. The RTU format follows the commands/data with a cyclic redundancy check checksum as an error check mechanism to ensure the reliability of data. A Modbus RTU message must be transmitted continuously without inter-character hesitations. Modbus messages are framed (separated) by idle (silent) periods. Each byte (8 bits) of data is sent as 11 bits: [ 3 ] [ 24 ]
The default is even parity, while odd or no parity may be implemented as additional options. [ 24 ]
A Modbus RTU frame then will be: [ 25 ]
The CRC calculation is widely known as CRC-16-MODBUS, whose polynomial is x 16 + x 15 + x 2 + 1 (normal hexadecimal algebraic polynomial being 8005 and reversed A001 ). [ 26 ]
Example of a Modbus RTU frame in hexadecimal: 01 04 02 FF FF B8 80 (CRC-16-MODBUS calculation for the 5 bytes from 01 to FF gives 80B8 , which is transmitted least significant byte first).
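That check value can be reproduced with the standard bit-by-bit CRC routine; the following Python sketch implements the widely documented CRC-16/MODBUS algorithm (initial value 0xFFFF, reflected polynomial 0xA001):

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16/MODBUS: polynomial 0x8005 (bit-reversed 0xA001), init 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

body = bytes.fromhex("010402FFFF")        # address, function and data fields
crc = crc16_modbus(body)
assert crc == 0x80B8
frame = body + crc.to_bytes(2, "little")  # CRC is appended LSB first
assert frame == bytes.fromhex("010402FFFFB880")
```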
To ensure frame integrity during the transmission, the time interval between two frames must be at least the transmission time of 3.5 characters, and the time interval between two consecutive characters must be no more than the transmission time of 1.5 characters. [ 25 ] For example, with the default data rate of 19200 bit/s, the transmission times of 3.5 (t3.5) and 1.5 (t1.5) 11-bit characters are:
t3.5 = 3.5 × (11 × 1000 / 19200) ≈ 2.005 ms
t1.5 = 1.5 × (11 × 10⁶ / 19200) = 859.375 μs
For higher data rates, Modbus RTU recommends to use the fixed values 750 μs for t1.5 and 1.750 ms for t3.5. [ 25 ]
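Since these limits are simple arithmetic on the character time, they are easy to compute for any data rate; a small sketch, assuming the 11-bit character format described above:

```python
def silent_intervals(baud: int, bits_per_char: int = 11):
    """Return the (t1.5, t3.5) timing limits in seconds for Modbus RTU."""
    if baud > 19200:
        return 750e-6, 1.75e-3          # fixed values recommended by the spec
    t_char = bits_per_char / baud       # transmission time of one character
    return 1.5 * t_char, 3.5 * t_char

t15, t35 = silent_intervals(19200)
print(f"t1.5 = {t15 * 1e6:.3f} us, t3.5 = {t35 * 1e3:.3f} ms")
# -> t1.5 = 859.375 us, t3.5 = 2.005 ms
```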
Modbus ASCII makes use of ASCII characters for protocol communication. The ASCII format uses a longitudinal redundancy check checksum. Modbus ASCII messages are framed by a leading colon (":") and trailing newline (CR/LF).
A Modbus ASCII frame includes: [ 27 ]
Address, Function, Data, and LRC are ASCII hexadecimal encoded values, whereby 8-bit values (0–255) are encoded as two human-readable ASCII characters from the ranges 0–9 and A–F. For example, a value of 122 (0x7A) is encoded as two ASCII characters, "7" and "A", and transmitted as two bytes, 55 (0x37, the ASCII value for "7") and 65 (0x41, the ASCII value for "A").
LRC is calculated as the sum of 8-bit values (excluding the start and end characters), negated ( two's complement ) and encoded as an 8-bit value. For example, if Address, Function, and Data are 247, 3, 19, 137, 0, and 10, the two's complement of their sum (416) is −416; this trimmed to 8 bits is 96 (256 × 2 − 416 = 96 = 0x60), giving the following 17-character ASCII frame: :F7031389000A60␍␊ . LRC is specified for use only as a checksum: because it is calculated on the encoded data rather than the transmitted characters, its 'longitudinal' characteristic is not available for use with parity bits to locate single-bit errors.
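The worked example can be verified in a few lines; this is a sketch of the LRC rule alone, not a complete Modbus ASCII implementation:

```python
def lrc(data: bytes) -> int:
    """Modbus ASCII LRC: two's complement of the 8-bit sum of the raw bytes."""
    return (-sum(data)) & 0xFF

fields = bytes([247, 3, 19, 137, 0, 10])   # address, function and data fields
assert lrc(fields) == 0x60                 # sum 416, negated, trimmed to 8 bits

frame = b":" + (fields + bytes([lrc(fields)])).hex().upper().encode() + b"\r\n"
assert frame == b":F7031389000A60\r\n"
```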
Modbus TCP or Modbus TCP/IP is a Modbus variant used for communications over TCP/IP networks, connecting over port 502. [ 28 ] It does not require a checksum calculation, as lower layers already provide checksum protection.
Modbus TCP nomenclature is the same as for the Modbus over Serial Line protocol: any device which sends out a Modbus command is the 'client', and the response comes from a 'server'. [ 29 ]
The ADU for Modbus TCP is officially called Modbus TCP/IP ADU by the Modbus organization [ 30 ] and is also called Modbus TCP frame by other parties. [ 3 ]
MODBUS TCP/IP ADU = MBAP Header + Function code + Data
Where MBAP - which stands for MODBUS Application Protocol header - is the dedicated header used on TCP/IP to identify the MODBUS Application Data Unit.
The MBAP Header contains the following fields: [ 31 ]
Unit identifier is used with Modbus TCP devices that are composites of several Modbus devices, e.g. Modbus TCP to Modbus RTU gateways. In such a case, the unit identifier is the Server Address of the device behind the gateway.
A MODBUS TCP/IP ADU/Modbus TCP frame format then will be: [ 31 ] [ 30 ]
12 34 00 00 00 06 01 03 00 01 00 01
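Decoding that example frame field by field, per the MBAP layout above (a minimal sketch; function code 0x03 is the standard Read Holding Registers request):

```python
import struct

frame = bytes.fromhex("123400000006010300010001")

# MBAP header: transaction id, protocol id (0 for Modbus), length, unit id.
tid, proto, length, unit = struct.unpack(">HHHB", frame[:7])
func, start, qty = struct.unpack(">BHH", frame[7:])

assert (tid, proto, unit) == (0x1234, 0, 1)
assert length == 6       # unit id (1 byte) + PDU (5 bytes)
assert (func, start, qty) == (0x03, 0x0001, 0x0001)   # read 1 register at address 1
```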
Besides the widely used Modbus RTU, Modbus ASCII and Modbus TCP, there are many variants of Modbus protocols:
Data models and function calls are identical for the first four variants listed above; only the encapsulation is different. However, the variants are not interoperable, nor are the frame formats.
Another de facto protocol closely related to Modbus appeared later, and was defined by PLC maker April Automates, the result of a collaborative effort between French companies Renault Automation and Merlin Gerin et Cie in 1985: JBUS. Differences between Modbus and JBUS at that time (number of entities, server stations) are now irrelevant as this protocol almost disappeared with the April PLC series, which AEG Schneider Automation bought in 1994 and then made obsolete. However, the name JBUS has survived to some extent.
JBUS supports function codes 1, 2, 3, 4, 5, 6, 15, and 16 and thus all the entities described above, although numbering is different: | https://en.wikipedia.org/wiki/Modbus |
In the term mode coupling , as used in physics and electrical engineering, the word "mode" refers to eigenmodes of an idealized, "unperturbed", linear system . The superposition principle says that eigenmodes of linear systems are independent of each other: it is possible to excite or to annihilate a specific mode without influencing any other mode; there is no dissipation . In most real systems, however, there is at least some perturbation that causes energy transfer between different modes. This perturbation, interpreted as an interaction between the modes, is what is called "mode coupling".
Important applications are: | https://en.wikipedia.org/wiki/Mode_coupling |
In pharmacology and biochemistry , mode of action ( MoA ) describes a functional or anatomical change, resulting from the exposure of a living organism to a substance. [ 1 ] In comparison, a mechanism of action (MOA) describes such changes at the molecular level. [ 2 ] [ 1 ]
A mode of action is important in classifying chemicals, as it represents an intermediate level of complexity in between molecular mechanisms and physiological outcomes, especially when the exact molecular target has not yet been elucidated or is subject to debate. A mechanism of action of a chemical could be "binding to DNA" while its broader mode of action would be "transcriptional regulation". [ 3 ] However, there is no clear consensus and the term mode of action is also often used, especially in the study of pesticides , to describe molecular mechanisms such as action on specific nuclear receptors or enzymes . [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] Despite this, there are classification attempts, such as the HRAC's classification to manage pesticide resistance . [ 11 ]
| https://en.wikipedia.org/wiki/Mode_of_action
A model is an informative representation of an object, person, or system. The term originally denoted the plans of a building in late 16th-century English, and derived via French and Italian ultimately from Latin modulus , ' a measure ' . [ 1 ]
Models can be divided into physical models (e.g. a ship model or a fashion model) and abstract models (e.g. a set of mathematical equations describing the workings of the atmosphere for the purpose of weather forecasting). Abstract or conceptual models are central to philosophy of science . [ 2 ] [ 3 ]
In scholarly research and applied science , a model should not be confused with a theory : while a model seeks only to represent reality with the purpose of better understanding or predicting the world, a theory is more ambitious in that it claims to be an explanation of reality. [ 4 ]
As a noun, model has specific meanings in certain fields, derived from its original meaning of "structural design or layout ":
A physical model (most commonly referred to simply as a model but in this context distinguished from a conceptual model ) is a smaller or larger physical representation of an object , person or system . The object being modelled may be small (e.g., an atom ) or large (e.g., the Solar System ) or life-size (e.g., a fashion model displaying clothes for similarly-built potential customers).
The geometry of the model and the object it represents are often similar in the sense that one is a rescaling of the other. However, in many cases the similarity is only approximate or even intentionally distorted. Sometimes the distortion is systematic, e.g., a fixed scale horizontally and a larger fixed scale vertically when modelling topography to enhance a region's mountains.
An architectural model permits visualization of internal relationships within the structure or external relationships of the structure to the environment. Another use is as a toy .
Instrumented physical models are an effective way of investigating fluid flows for engineering design. Physical models are often coupled with computational fluid dynamics models to optimize the design of equipment and processes. This includes external flow such as around buildings, vehicles, people, or hydraulic structures . Wind tunnel and water tunnel testing is often used for these design efforts. Instrumented physical models can also examine internal flows, for the design of ductwork systems, pollution control equipment, food processing machines, and mixing vessels. Transparent flow models are used in this case to observe the detailed flow phenomenon. These models are scaled in terms of both geometry and important forces, for example, using Froude number or Reynolds number scaling (see Similitude ). In the pre-computer era, the UK economy was modelled with the hydraulic model MONIAC , to predict for example the effect of tax rises on employment.
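As a minimal illustration of the similitude scaling mentioned above (numbers invented for the example), Froude similarity keeps V/√(gL) equal between model and prototype, so the model test speed follows directly from the geometric scale:

```python
import math

def froude_model_speed(prototype_speed: float, scale: float) -> float:
    """Froude similarity: V / sqrt(g * L) is equal for model and prototype,
    hence V_model = V_prototype * sqrt(L_model / L_prototype)."""
    return prototype_speed * math.sqrt(scale)

# Hypothetical case: a 1:25 model of a spillway, prototype flow of 10 m/s.
print(froude_model_speed(10.0, 1 / 25))   # -> 2.0 m/s at model scale
```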
A conceptual model is a theoretical representation of a system, e.g. a set of mathematical equations attempting to describe the workings of the atmosphere for the purpose of weather forecasting. [ 8 ] It consists of concepts used to help understand or simulate a subject the model represents.
Abstract or conceptual models are central to philosophy of science , [ 2 ] [ 3 ] as almost every scientific theory effectively embeds some kind of model of the physical or human sphere . In some sense, a physical model "is always the reification of some conceptual model; the conceptual model is conceived ahead as the blueprint of the physical one", which is then constructed as conceived. [ 9 ] Thus, the term refers to models that are formed after a conceptualization or generalization process. [ 2 ] [ 3 ]
According to Herbert Stachowiak , a model is characterized by at least three properties: [ 10 ]
For example, a street map is a model of the actual streets in a city (mapping), showing the course of the streets while leaving out, say, traffic signs and road markings (reduction), made for pedestrians and vehicle drivers for the purpose of finding one's way in the city (pragmatism).
Additional properties have been proposed, like extension and distortion [ 12 ] as well as validity . [ 13 ] The American philosopher Michael Weisberg differentiates between concrete and mathematical models and proposes computer simulations (computational models) as their own class of models. [ 14 ]
According to Bruce Edmonds, there are at least 5 general uses for models: [ 15 ] | https://en.wikipedia.org/wiki/Model |
Model-Informed Precision Dosing (MIPD) is the use of pharmacometric models with computer software to optimize drug dosage for an individual patient. [ 1 ] Developed in the late 1960s under the impetus of clinical pharmacologists such as Lewis Sheiner and Roger Jelliffe, these approaches involve applying the equations and parameters describing a drug's pharmacokinetics and pharmacodynamics to define the best dosage regimen for a given individual, likely to produce circulating concentrations associated with maximum efficacy and minimum toxicity. Models typically take into account the patient's demographic characteristics (e.g., age, gender, ethnicity), clinical profile (e.g., body measurements, renal and hepatic function, comorbidities, co-medications, dietary habits, substance use) and possibly genetic factors (e.g., polymorphisms affecting cytochromes or drug transporters ). When starting a treatment, these models can be used to select a priori the optimal dosage for a patient, based on simulations. During the treatment course, these same models can be used to integrate the results of Therapeutic Drug Monitoring (i.e., the measurement and medical interpretation of circulating drug concentrations) or the measurement of biomarkers of efficacy or toxicity, in an a posteriori approach to dose optimization, derived from Bayesian inference and feedback loops . Practically, these approaches make extensive use of computer software dedicated to the clinical use of pharmacokinetic/pharmacodynamic models , belonging to the computerized clinical decision support tools . [ 2 ] [ 3 ] They complement Model-Informed Drug Development (MIDD), which is mainly carried out by pharmaceutical industry researchers prior to marketing.
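The a posteriori step can be sketched very roughly in code. The toy Python example below (every parameter value is hypothetical, and this is not clinical guidance) fits an individual clearance to a single measured concentration by maximum a posteriori (MAP) estimation over a grid, then rescales the next dose:

```python
import math

# Toy one-compartment IV model: C(t) = dose / V * exp(-(CL / V) * t).
V = 40.0        # volume of distribution (L), assumed known here
CL_POP = 5.0    # population-typical clearance (L/h), the prior mode
OMEGA = 0.3     # s.d. of log-clearance across patients (prior spread)
SIGMA = 0.2     # s.d. of residual error on the log scale

def conc(dose: float, cl: float, t: float) -> float:
    return dose / V * math.exp(-(cl / V) * t)

def map_clearance(dose: float, t_obs: float, c_obs: float) -> float:
    """Grid-search MAP estimate of clearance from one measured drug level."""
    best_cl, best_lp = CL_POP, -math.inf
    for i in range(1, 400):
        cl = i * 0.05
        log_prior = -(math.log(cl / CL_POP) ** 2) / (2 * OMEGA ** 2)
        log_lik = -(math.log(c_obs / conc(dose, cl, t_obs)) ** 2) / (2 * SIGMA ** 2)
        if log_prior + log_lik > best_lp:
            best_cl, best_lp = cl, log_prior + log_lik
    return best_cl

cl_i = map_clearance(dose=500.0, t_obs=8.0, c_obs=3.0)
next_dose = 500.0 * cl_i / CL_POP   # hold exposure (dose / CL) at the target
print(f"individual CL ~ {cl_i:.2f} L/h, adjusted dose ~ {next_dose:.0f} mg")
```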
Prescribers are expected to make increasingly regular use of model-driven precision dosing tools for patient treatment and follow-up. Dosage individualization represents the quantitative aspect of precision medicine , while the qualitative aspect lies in the personalized choice of the best drug to treat a given pathology. This optimization of dose selection is especially desirable for drugs with a narrow therapeutic index (i.e., effective concentrations close to toxic ones). It is also important when a treatment is to be applied to patients with particular characteristics, such as children, frail elderly persons, polymorbid patients or those already heavily treated. Technical hurdles still limit the wide implementation of these approaches in clinical practice, but electronic patient records are expected to continue to develop, enabling the increasing integration of model-informed precision dosing into medical practice. [ 4 ] [ 5 ]
| https://en.wikipedia.org/wiki/Model-Informed_Precision_Dosing
Model-based design ( MBD ) is a mathematical and visual method of addressing problems associated with designing complex control, [ 1 ] signal processing [ 2 ] and communication systems. It is used in many motion control , industrial equipment, aerospace , and automotive applications. [ 3 ] [ 4 ] Model-based design is a methodology applied in designing embedded software . [ 5 ] [ 6 ] [ 7 ]
Model-based design provides an efficient approach for establishing a common framework for communication throughout the design process while supporting the development cycle ( V-model ). In model-based design of control systems, development is manifested in these four steps:
Model-based design is significantly different from traditional design methodology. Rather than using complex structures and extensive software code, designers can use model-based design to define plant models with advanced functional characteristics using continuous-time and discrete-time building blocks. Used with simulation tools, these models can lead to rapid prototyping, software testing, and verification. Not only is the testing and verification process enhanced, but in some cases hardware-in-the-loop simulation can be used with the new design paradigm to test dynamic effects on the system more quickly and much more efficiently than with traditional design methodology.
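To make the pattern concrete, here is a deliberately small sketch (all values illustrative): a first-order plant model built from basic discrete-time elements, closed with a PI controller and exercised entirely in simulation before any production code exists:

```python
def simulate(kp: float, ki: float, setpoint: float = 1.0, steps: int = 200):
    """Closed-loop simulation of a first-order plant under PI control."""
    dt, tau = 0.01, 0.5           # sample time and plant time constant
    y, integral = 0.0, 0.0        # plant model: dy/dt = (u - y) / tau
    history = []
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        y += dt * (u - y) / tau          # Euler step of the plant model
        history.append(y)
    return history

response = simulate(kp=2.0, ki=1.0)
print(f"output after 2 s: {response[-1]:.3f}")   # settles toward 1.0
```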
As early as the 1920s two aspects of engineering, control theory and control systems, converged to make large-scale integrated systems possible. In those early days control systems were commonly used in the industrial environment. Large process facilities started using process controllers for regulating continuous variables such as temperature, pressure, and flow rate. Electrical relays built into ladder-like networks were one of the first discrete control devices to automate an entire manufacturing process.
Control systems gained momentum, primarily in the automotive and aerospace sectors. In the 1950s and 1960s, the push to space generated interest in embedded control systems. Engineers constructed control systems such as engine control units and flight simulators, that could be part of the end product. By the end of the twentieth century, embedded control systems were ubiquitous, as even major household consumer appliances such as washing machines and air conditioners contained complex and advanced control algorithms, making them much more "intelligent".
In 1969, the first computer-based controllers were introduced. These early programmable logic controllers (PLC) mimicked the operations of already available discrete control technologies that used the outdated relay ladders. The advent of PC technology brought a drastic shift in the process and discrete control market. An off-the-shelf desktop loaded with adequate hardware and software can run an entire process unit, and execute complex and established PID algorithms or work as a distributed control system (DCS).
The main steps in model-based design approach are:
At this point in the development lifecycle of the methodology, the disadvantages of model-based design are fairly well understood.
While model-based design is good at simulating test scenarios and interpreting simulations, it is often not suitable for real-world production environments. Over-reliance on a given toolchain can lead to significant rework and possibly compromise entire engineering approaches. While it is suitable for bench work, the choice to use it for a production system should be made very carefully.
Some of the advantages model-based design offers in comparison to the traditional approach are: [ 9 ]
Because of the limitations of graphical tools, design engineers previously relied heavily on text-based programming and mathematical models. However, developing these models was time-consuming, and highly prone to error. In addition, debugging text-based programs is a tedious process, requiring much trial and error before a final fault-free model could be created, especially since mathematical models undergo unseen changes during the translation through the various design stages.
Graphical modeling tools aim to improve these aspects of design. These tools provide a very generic and unified graphical modeling environment, and they reduce the complexity of model designs by breaking them into hierarchies of individual design blocks. Designers can thus achieve multiple levels of model fidelity by simply substituting one block element with another. Graphical models also help engineers to conceptualize the entire system and simplify the process of transporting the model from one stage to another in the design process. Boeing's simulator EASY5 was among the first modeling tools to be provided with a graphical user interface, together with AMESim , a multi-domain, multi-level platform based on Bond Graph theory. This was soon followed by tools like 20-sim and Dymola , which allowed models to be composed of physical components like masses, springs, resistors, etc. These were later followed by many other modern tools such as Simulink and LabVIEW .
Model-based enterprise (MBE) is a term used in manufacturing, to describe a strategy where an annotated digital three-dimensional (3D) model of a product serves as the authoritative information source for all activities in that product's lifecycle . [ 1 ] [ 2 ]
A key advantage of MBE is that it replaces digital drawings. In MBE, a single 3D model contains all the information typically found in an entire set of engineering drawings, including geometry, topology, dimensions, tolerances, materials, finishes, and weld call-outs. [ 3 ]
MBE was originally championed by the aerospace and defense industries, with the automotive industry following. [ 4 ] It has been adopted by many manufacturers around the world, in a wide range of industries. Significant benefits for manufacturers include reduced time to market and savings in production costs from improved tool design and fabrication, fewer overall assembly hours, less rework, streamlined development and better collaboration on engineering changes. [ 5 ]
There are two prerequisites to implementing MBE. The first is the creation of annotated 3D models, known as model-based definitions (MBD) . This requires the use of a CAD system capable of creating precise solid models , with product and manufacturing information (PMI) , a form of 3D annotation which may include dimensions, GD&T , notes, surface finish, and material specifications. (The mechanical CAD systems used in the aerospace, defense, and automotive industries generally have these capabilities.) The second prerequisite is transforming MBDs into a form where they can be used in downstream lifecycle activities. As a rule, CAD models are stored in proprietary data formats, so they must be translated to a suitable MBD-compatible standard format, such as 3D PDF , [ 6 ] JT , STEP AP 242 , or ANSI QIF . [ 7 ]
The core MBE tenet is that models are used to drive all aspects of the product lifecycle and that data is created once and reused by all downstream data consumers. Data reusability requires computer interpretability , where an MBD can be processed directly by downstream applications, and associativity of PMI to specific model features within the MBD. [ 1 ]
Historically, engineering and manufacturing activities have relied on hardcopy and/or digital documents (including 2D drawings ) to convey engineering data and drive manufacturing processes. These documents required interpretation by skilled practitioners, often leading to ambiguities and errors. [ 8 ] [ 9 ]
In the 1980s, improvements in 3D solid modeling made it possible for CAD systems to precisely represent the shape of most manufactured goods [ 10 ] —however, even enthusiastic adopters of solid modeling technology continued to rely upon 2D drawings (often CAD generated) as the authority (or master) product representation. 3D models, even if geometrically accurate, lacked a method to represent dimensions, tolerances, and other annotative information required to drive manufacturing processes.
In the early-to-middle 2000s, the ASME Y14.41-2003 Digital Product Data Definition Practices and ISO 16792:2006 Technical product documentation—Digital product definition data practices [ 11 ] standards were released, providing support for PMI annotations in 3D CAD models, and introducing the concept of MBD (or, alternatively, digital product definition) [ 12 ]
The model-based enterprise concept first appeared about 2005. Initially it was construed broadly, referring to the pervasive use of modeling and simulation technologies (of almost any type) throughout an enterprise. [ 13 ] In the late 2000s, an active community advocating development of MBE grew, based on the collaborative efforts of the Office of the Secretary of Defense , Army Research Laboratory , Armament Research Development Engineering Center (ARDEC) , Army ManTech, BAE Systems , NIST , and the NIST Manufacturing Extension Partnership (MEP). [ 14 ] The "MBE Team" included industry participants such as General Dynamics , Pratt & Whitney Rocketdyne , Elysium, Adobe , EOS, ITI TranscenData, Vistagy , PTC , Dassault Systèmes Delmia, Boeing , and BAE Systems . [ 15 ]
Over time, based on community feedback, MBE became more narrowly construed, referring to the use of MBD data to drive product lifecycle activities. [ 16 ] [ 1 ] In 2011, the MBE Team published these definitions:
By 2015, with improvements to ASME Y14.41 and ISO 16792, and the development of open CAD data exchange standards capable of adequately representing PMI, MBE started to become more widely adopted by manufacturers. | https://en.wikipedia.org/wiki/Model-based_enterprise |
In artificial intelligence , model-based reasoning refers to an inference method used in expert systems based on a model of the physical world. With this approach, the main focus of application development is developing the model. Then at run time, an "engine" combines this model knowledge with observed data to derive conclusions such as a diagnosis or a prediction.
Robots, and dynamical systems in general, are controlled by software. The software is implemented as a normal computer program consisting of if-then statements, for-loops and subroutines. The task for the programmer is to find an algorithm which is able to control the robot so that it can perform a task. In the history of robotics and optimal control [ 1 ] many paradigms were developed. One of them is expert systems , which focus on restricted domains. [ 2 ] Expert systems are the precursor to model-based systems.
The main reason why model-based reasoning has been researched since the 1990s is to create different layers for the modeling and control of a system. [ 3 ] This makes it possible to solve more complex tasks and to reuse existing programs for different problems. The model layer is used to monitor a system and to evaluate whether the actions are correct, while the control layer determines the actions and brings the system into a goal state. [ 4 ]
Typical techniques to implement a model are declarative programming languages like Prolog [ 5 ] and Golog. From a mathematical point of view, a declarative model has much in common with the situation calculus as a logical formalization for describing a system. [ 6 ] From a more practical perspective, a declarative model means that the system is simulated with a game engine . A game engine takes a feature as an input value and determines the output signal. Sometimes, a game engine is described as a prediction engine for simulating the world.
In 1990, criticism of model-based reasoning was formulated. Pioneers of Nouvelle AI argued that symbolic models are separated from the underlying physical systems and fail to control robots. [ 7 ] According to proponents of behavior-based robotics , a reactive architecture can overcome this issue: such a system does not need a symbolic model, because its actions are connected directly to sensor signals which are grounded in reality.
In a model-based reasoning system knowledge can be represented using causal rules . For example, in a medical diagnosis system the knowledge base may contain the following rule:
In contrast in a diagnostic reasoning system knowledge would be represented through diagnostic rules such as:
There are many other forms of models that may be used. Models might be quantitative (for instance, based on mathematical equations) or qualitative (for instance, based on cause/effect models). They may include representations of uncertainty. They might represent behavior over time. They might represent "normal" behavior, or might only represent abnormal behavior, as in the case of the examples above. Model types and their usage for model-based reasoning are discussed in [ 8 ] .
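As a toy illustration of this style of reasoning (the model below is invented for the example), a model-based engine stores cause-to-effect knowledge, and the reasoning step inverts it against the observed data:

```python
# Hypothetical cause -> effects model of a car that will not start.
CAUSAL_MODEL = {
    "flat_battery": {"no_lights", "engine_wont_start"},
    "empty_tank": {"engine_wont_start"},
}

def candidate_causes(observations: set) -> list:
    """Diagnose by inverting the causal model: keep every cause whose
    predicted effects account for all of the observations."""
    return [cause for cause, effects in CAUSAL_MODEL.items()
            if observations <= effects]

print(candidate_causes({"no_lights", "engine_wont_start"}))  # ['flat_battery']
```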
| https://en.wikipedia.org/wiki/Model-based_reasoning
Model-based systems engineering ( MBSE ) represents a paradigm shift in systems engineering , replacing traditional document-centric approaches with a methodology that uses structured domain models as the primary means of information exchange and system representation throughout the engineering lifecycle. [ 1 ] [ 2 ]
Unlike document-based approaches where system specifications are scattered across numerous text documents, spreadsheets , and diagrams that can become inconsistent over time, MBSE centralizes information in interconnected models that automatically maintain relationships between system elements. These models serve as the authoritative source of truth for system design , enabling automated verification of requirements, real-time impact analysis of proposed changes, and generation of consistent documentation from a single source. This approach significantly reduces errors from manual synchronization, improves traceability between requirements and implementation, and facilitates earlier detection of design flaws through simulation and analysis.
The MBSE approach has been widely adopted across industries dealing with complex systems development, including aerospace, defense, rail, automotive, and manufacturing. [ 3 ] [ 4 ] [ 5 ] [ 6 ] By enabling consistent system representation across disciplines and development phases, MBSE helps organizations manage complexity, reduce development risks, improve quality, and enhance collaboration among multidisciplinary teams.
The International Council on Systems Engineering (INCOSE) defines MBSE as the formalized application of modeling to support system requirements, design, analysis, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases. [ 1 ] [ 7 ]
The first known prominent public usage of the term "Model-Based Systems Engineering" is a book by A. Wayne Wymore with the same name. [ 8 ] The MBSE term was also commonly used among the SysML Partners consortium during the formative years of their Systems Modeling Language (SysML) open source specification project during 2003-2005, so they could distinguish SysML from its parent language UML v2 , where the latter was software-centric and associated with the term Model-Driven Development (MDD). The standardization of SysML in 2006 resulted in widespread modeling tool support for it and associated MBSE processes that emphasized SysML as their lingua franca .
In September 2007, the MBSE approach was further generalized and popularized when INCOSE introduced its "MBSE 2020 Vision", which was not restricted to SysML, and supported other competitive modeling language standards, such as AP233, HLA, and Modelica . [ 9 ] [ 10 ] According to the MBSE 2020 Vision: "MBSE is expected to replace the document-centric approach that has been practiced by systems engineers in the past and to influence the future practice of systems engineering by being fully integrated into the definition of systems engineering processes."
As of 2014, the scope of MBSE started to cover more Modeling and Simulation topics, in an attempt to bridge the gap between system model specifications and related system software simulations. As a consequence, the term "modeling and simulation-based systems engineering" has also been increasingly associated along with MBSE. [ 11 ]
According to the INCOSE SEBoK ( Systems Engineering Book of Knowledge ) MBSE may be considered a "subset of digital engineering ". [ 12 ] INCOSE hosts an annual meeting on MBSE, as well as MBSE working groups. [ 10 ] | https://en.wikipedia.org/wiki/Model-based_systems_engineering |
Model-driven engineering ( MDE ) is a software development methodology that focuses on creating and exploiting domain models , which are conceptual models of all the topics related to a specific problem. Hence, it highlights and aims at abstract representations of the knowledge and activities that govern a particular application domain , rather than the computing (i.e. algorithmic) concepts.
MDE is a subfield of a software design approach referred to as round-trip engineering . The scope of MDE is much wider than that of Model-Driven Architecture . [ 1 ]
The MDE approach is meant to increase productivity by maximizing compatibility between systems (via reuse of standardized models), simplifying the process of design (via models of recurring design patterns in the application domain), and promoting communication between individuals and teams working on the system (via a standardization of the terminology and the best practices used in the application domain). For instance, in model-driven development, technical artifacts such as source code, documentation, tests, and more are generated algorithmically from a domain model. [ 2 ]
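A minimal sketch of that generation step (the model contents and naming are invented for illustration): the domain model is plain data, and a generator derives a source-code artifact from it; the same model could equally drive tests or documentation:

```python
# Hypothetical domain model: one entity with typed fields.
DOMAIN_MODEL = {
    "entity": "Customer",
    "fields": [("name", "str"), ("email", "str"), ("credit_limit", "float")],
}

def generate_class(model: dict) -> str:
    """Emit a Python class definition from the domain model."""
    lines = [f"class {model['entity']}:"]
    params = ", ".join(f"{n}: {t}" for n, t in model["fields"])
    lines.append(f"    def __init__(self, {params}):")
    lines += [f"        self.{n} = {n}" for n, _ in model["fields"]]
    return "\n".join(lines)

print(generate_class(DOMAIN_MODEL))
```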
A modeling paradigm for MDE is considered effective if its models make sense from the point of view of a user that is familiar with the domain, and if they can serve as a basis for implementing systems. The models are developed through extensive communication among product managers, designers, developers and users of the application domain. As the models approach completion, they enable the development of software and systems.
Some of the better known MDE initiatives are:
The first tools to support MDE were the Computer-Aided Software Engineering ( CASE ) tools developed in the 1980s. Companies like Integrated Development Environments (IDE – StP), Higher Order Software (now Hamilton Technologies, Inc., HTI), Cadre Technologies, Bachman Information Systems , and Logic Works (BP-Win and ER-Win) were pioneers in the field.
The US government got involved in modeling definitions, creating the IDEF specifications. Several variations of the modeling definitions (see Booch , Rumbaugh , Jacobson , Gane and Sarson, Harel , Shlaer and Mellor , and others) were eventually joined, creating the Unified Modeling Language (UML). Rational Rose , a product for UML implementation, was developed by Rational Corporation (Booch), responding to the way automation yields higher levels of abstraction in software development. This abstraction promotes simpler models with a greater focus on the problem space. Combined with executable semantics, this elevates the total level of automation possible. The Object Management Group (OMG) has developed a set of standards called Model-Driven Architecture (MDA), building a foundation for this advanced architecture-focused approach.
According to Douglas C. Schmidt , model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively. [ 4 ]
Notable software tools for model-driven engineering include: | https://en.wikipedia.org/wiki/Model-driven_engineering |
In software design , model-driven integration is a subset of model-driven architecture (MDA) which focuses purely on solving Application Integration problems using executable Unified Modeling Language (UML). | https://en.wikipedia.org/wiki/Model-driven_integration |
Model-driven security (MDS) means applying model-driven approaches (and especially the concepts behind model-driven software development ) [ 1 ] to security .
The general concept of Model-driven security in its earliest forms has been around since the late 1990s (mostly in university research [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] ), and was first commercialized around 2002. [ 11 ] There is also a body of later scientific research in this area, [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] which continues to this day.
A more specific definition of Model-driven security specifically applies model-driven approaches to automatically generate technical security implementations from security requirements models. In particular, "Model driven security (MDS) is the tool supported process of modelling security requirements at a high level of abstraction, and using other information sources available about the system (produced by other stakeholders). These inputs, which are expressed in Domain Specific Languages (DSL), are then transformed into enforceable security rules with as little human intervention as possible. MDS explicitly also includes the run-time security management (e.g. entitlements/authorisations), i.e. run-time enforcement of the policy on the protected IT systems, dynamic policy updates and the monitoring of policy violations." [ 18 ]
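The model-to-enforcement transformation might be sketched as follows (the policy vocabulary and rule format are invented for illustration; real MDS tools work from richer DSLs and generate rules for concrete enforcement points):

```python
# Hypothetical high-level security requirements: (role, action, resource).
REQUIREMENT_MODEL = [
    ("clinician", "read", "patient_record/*"),
    ("admin", "write", "audit_log/*"),
]

def to_enforcement_rules(model):
    """Transform each abstract requirement into a concrete allow-rule."""
    return [{"effect": "allow", "role": role,
             "action": action, "resource": pattern}
            for role, action, pattern in model]

for rule in to_enforcement_rules(REQUIREMENT_MODEL):
    print(rule)
```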
Model-driven security is also well-suited for automated auditing, reporting, documenting, and analysis (e.g. for compliance and accreditation), because the relationships between models and technical security implementations are traceably defined through the model-transformations. [ 19 ]
Several industry analyst sources [ 20 ] [ 21 ] [ 22 ] state that MDS "will have a significant impact as information security infrastructure is required to become increasingly real-time, automated and adaptive to changes in the organisation and its environment". Many information technology architectures today are built to support adaptive changes (e.g. Service Oriented Architectures (SOA) and so-called Platform-as-a-Service "mashups" in cloud computing [ 23 ] ), and information security infrastructure will need to support that adaptivity ("agility"). The term DevOpsSec (see DevOps ) is used by some analysts [ 24 ] equivalent to model-driven security.
Because MDS automates the generation and re-generation of technical security enforcement from generic models, it: [ 25 ] [ 18 ]
Apart from academic proof-of-concept developments, the only commercially available full implementations of model-driven security (for authorization management policy automation) include ObjectSecurity OpenPMF, [ 11 ] which earned a listing in Gartner's "Cool Vendor" report in 2008 [ 26 ] and has been advocated by a number of organizations (e.g. U.S. Navy [ 27 ] ) as a means to make authorization policy management easier and more automated. | https://en.wikipedia.org/wiki/Model-driven_security |
The Model 109 , or Number 109 , was a series of mainframe computers designed and built in the People's Republic of China , starting in 1964 . [ 1 ]
Those were followed by the Number 111 , their first integrated circuit computer, in 1971. [ 1 ] | https://en.wikipedia.org/wiki/Model_109
The Model K was an early 2-bit binary adder built in 1937 by Bell Labs scientist George Stibitz as a proof of concept , using scrap relays and metal strips from a tin can. The "K" in "Model K" came from "kitchen table", upon which he assembled it. [ 1 ] [ 2 ] [ 3 ] [ 4 ] | https://en.wikipedia.org/wiki/Model_K_(calculator)
A model airport is a scale model of an airport.
While airport models have existed, in some form, since airfields were first opened to the public, early model airports were largely restricted to public showcases about an airport and its surroundings, usually located inside the airport itself. [ citation needed ]
Since Herpa Wings introduced an airport set series in its line of airline-related toys, there has been an increase in the number of aircraft modelers who make mock airports to showcase their private collections of model aircraft. Often, the collector will model their airport after a real-life airport. [ citation needed ]
Model airports can be made to look very realistic, with many real airport features such as terminals, control towers, cargo terminals, hangars, passenger bridges and more. Companies such as Gemini Jets , Herpa Wings , and JC Wings have produced ground support equipment in various scales.
Collectors who make model airports may use die-cast models for their creation. Among the brands of die cast aircraft models most commonly used on these airports are Aeroclassics , Herpa Wings , Dragon Models Limited , Gemini Jets , Phoenix Models , JC Wings , and NG Model . [ citation needed ]
In 2011, what may be the world's largest model airport opened for public view at Miniatur Wunderland in Hamburg, Germany . The model, named Knuffingen International Airport, is based on Hamburg International Airport . [ 1 ] Another popular park in Europe, Madurodam in the Netherlands, includes a model airport [ 2 ] featuring models of several airlines such as KLM , Emirates , Lufthansa , EVA Air , Turkish Airlines , UPS Airlines , Transavia , Thai Airways , Korean Air , Delta Air Lines , and an A380 of Singapore Airlines , alongside a DHL -branded Airbus A300 . [ 3 ] The Madurodam airport is based on Amsterdam's Schiphol Airport .
In addition, the TWA Hotel in New York features a model airport that shows how the TWA airline's operations center at New York's John F. Kennedy Airport looked in the 1960s and 1970s. This model airport was made using TWA model aircraft, runways, and buildings in 1:400 scale. [ 4 ] | https://en.wikipedia.org/wiki/Model_airport
Model building is a hobby and career [ 1 ] [ 2 ] that involves the creation of physical models either from kits or from materials and components acquired by the builder. The kits contain several pieces that need to be assembled in order to make a final model. Most model-building categories have a range of common scales that make them manageable for the average person both to complete and display. [ 3 ] A model is generally considered a physical representation of an object that maintains accurate relationships between all of its aspects. [ 4 ]
Model building kits can be classified according to skill levels that represent the degree of difficulty for the hobbyist. These include skill level 1, with snap-together pieces that do not require glue or paint; skill level 2, which requires glue and paint; and skill level 3, whose kits include smaller and more detailed parts. [ 3 ] Advanced skill level 4 and 5 kits ship with components that have extra-fine details; level 5 in particular requires expert-level skills.
Model building is not exclusively a hobbyist pursuit. The complexity of assembling representations of actual objects has become a career for a number of people, and is heavily applicable in film making. [ citation needed ] There are, for instance, those who build models or props to commemorate historic events, and others employed to construct models using past events as a basis to predict future events of high commercial interest. [ 5 ] [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Model_building
In model theory , a first-order theory is called model complete if every embedding of its models is an elementary embedding .
Equivalently, every first-order formula is equivalent modulo the theory to a universal formula.
This notion was introduced by Abraham Robinson .
A companion of a theory T is a theory T * such that every model of T can be embedded in a model of T * and vice versa.
A model companion of a theory T is a companion of T that is model complete. Robinson proved that a theory has at most one model companion. Not every theory is model-companionable; for example, the theory of groups has no model companion. However, if T is an ℵ₀-categorical theory , then it always has a model companion. [ 1 ] [ 2 ]
A model completion for a theory T is a model companion T * such that for any model M of T , the theory of T * together with the diagram of M is complete . Roughly speaking, this means every model of T is embeddable in a model of T * in a unique way.
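A standard textbook illustration (a classical fact, not drawn from this article's text) is the theory of fields, whose model completion is ACF, the theory of algebraically closed fields:

% Classical example: T = theory of fields, T* = ACF.
% Every field embeds in an algebraically closed field and every model of
% ACF is a field (companion); ACF is model complete; and for every field F
% the theory ACF together with diag(F) is complete, as the definition of
% model completion requires.
T = \mathrm{Th}(\text{fields}), \qquad T^{*} = \mathrm{ACF}, \qquad \mathrm{ACF} \cup \operatorname{diag}(F)\ \text{is complete for every field } F.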
If T * is a model companion of T , then the following conditions are equivalent: [ 3 ] (1) T * is a model completion of T ; (2) T has the amalgamation property .
If T also has a universal axiomatization, both of the above are also equivalent to a third condition: T * has quantifier elimination .
If T is a model complete theory and there is a model of T that embeds into any model of T , then T is complete. [ 4 ] | https://en.wikipedia.org/wiki/Model_complete_theory |
Model elimination is the name attached to a pair of proof procedures invented by Donald W. Loveland , the first of which was published in 1968 in the Journal of the ACM . Their primary purpose is to carry out automated theorem proving , though they can readily be extended to logic programming , including the more general disjunctive logic programming .
Model elimination is closely related to resolution while also bearing characteristics of a tableaux method. It is a progenitor of the SLD resolution procedure used in the Prolog logic programming language.
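Since SLD resolution is the simplest descendant of this family, a minimal sketch of it (restricted to propositional Horn clauses, and far simpler than Loveland's full model elimination procedure) may help fix the idea; the program and goals below are purely illustrative:

# SLD resolution on propositional Horn clauses; a clause is (head, body).
def sld_prove(goals, program, depth=20):
    """Return True if every goal is derivable from the Horn program."""
    if not goals:
        return True                    # empty goal list: success
    if depth == 0:
        return False                   # crude guard against infinite loops
    first, rest = goals[0], goals[1:]
    for head, body in program:
        if head == first:              # resolve the selected goal
            if sld_prove(body + rest, program, depth - 1):
                return True
    return False

# Toy program: "man" is a fact; "mortal" follows from "man".
program = [("man", []), ("mortal", ["man"])]
print(sld_prove(["mortal"], program))  # prints True

A Prolog system performs essentially this computation, extended with unification over first-order terms and more careful control of the search.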
While somewhat eclipsed by attention to, and progress in, resolution theorem provers, model elimination has continued to attract the attention of researchers and software developers. Today there are several theorem provers under active development that are based on the model elimination procedure. | https://en.wikipedia.org/wiki/Model_elimination |
A model engine is a small internal combustion engine [ 1 ] typically used to power a radio-controlled aircraft , radio-controlled car , radio-controlled boat , free flight , control line aircraft, or ground-running tether car model.
Because of the square–cube law , the behaviour of an engine does not scale up or down at the same rate as the machine's size; at best this causes a dramatic loss of power or efficiency, and at worst it prevents the engine from working at all. Methanol and nitromethane are common fuels.
The fully functional, albeit small, engines vary from the most common single-cylinder two-stroke to the exotic single- and multiple-cylinder four-stroke , the latter taking shape in boxer , v-twin , inline and radial form; a few Wankel engine designs are also used. Most model engines run on a blend of methanol , nitromethane , and lubricant (either castor or synthetic oil ).
Two-stroke model engines, most often designed since 1970 with Schnuerle porting for best performance, typically range in size from .12 cubic inches (2 cubic centimeters) to 1.2 ci (19.6 cc) and generate from .5 horsepower (370 watts ) to 5 hp (3.7 kW); they can be as small as .010 ci (.16 cc) and as large as 3–4 ci (49–66 cc). [ 2 ] Four-stroke model engines have been made in sizes from 0.20 ci (3.3 cc) for the smallest single-cylinder models up to 3.05 ci (50 cc) for the largest single-cylinder units, with twin- and multi-cylinder engines on the market ranging from as small as 10 cc for opposed-cylinder twins to well above 200 cc for some model boxer, inline and radial engines . While the methanol and nitromethane blended " glow fuel " engines are the most common, many larger model engines (especially above 15 cc/0.90 ci displacement), both two-stroke and a growing number of four-stroke examples, are spark ignition and are primarily fueled with gasoline . Some two- and four-stroke methanol aeromodeling engines designed for glow plugs are capable, with aftermarket upgrades, of having battery-powered, electronically controlled spark ignition systems replace the glow plugs normally used. Model engines refitted in such a manner often run more efficiently on methanol-based glow plug engine fuels, often with the ability to exclude nitromethane altogether from their fuel formulas.
This article concerns itself with the methanol engines; gasoline-powered model engines are similar to those built for use in string trimmers , chainsaws , and other yard equipment, unless they happen to be purpose-built for aeromodeling use, as is especially true of four-stroke gasoline-fueled model engines. Such engines usually use a fuel that contains a small percentage of motor oil for lubrication, as a two-stroke engine does, because most model four-stroke engines — be they glow plug or spark ignition — have no built-in reservoir for motor oil in their crankcase or engine block design.
The majority of model engines have used, and continue to use, the two-stroke cycle principle to avoid needing valves in the combustion chamber , but a growing number of model engines use the four-stroke cycle design instead. Both reed valve and rotary valve -type two-strokes are common, with four-stroke model engines using either conventional poppet valve or rotary valve formats for induction and exhaust.
The engine shown to the right has its carburetor in the center of the zinc alloy casting to the left. (It uses a flow restriction, like the choke on an old car engine, because the venturi effect is not effective on such a small scale.) The valve reed, cross-shaped above its retainer spring, is still beryllium copper alloy in this old engine. The glow plug is built into the cylinder head. Large production volume makes it possible to use a machined cylinder and an extruded crank case (cut away by hand in the example shown). These Cox Bee reed valve engines are notable for their low cost and ability to survive crashes. The components of the engine shown come from several different engines.
Images of a glowplug engine and a "diesel" engine are shown below for comparison. The most obvious external difference is seen on top of the cylinder head. The glowplug engine's glow plug has a pinlike terminal for its center contact, which is an electrical connector for the glowplug. The "diesel" engine has a T-bar which is used for adjusting the compression. The cylindrical object behind the glowplug engine is an exhaust silencer or muffler .
Glow plugs are used for starting as well as continuing the power cycle. The glow plug consists of a durable, mostly platinum , helically wound wire filament within a cylindrical pocket in the plug body, exposed to the combustion chamber . A small direct current voltage (around 1.5 volts) is applied to the glow plug, the engine is then started, and the voltage is removed. Combustion in a glow-plug model engine requires methanol, because ignition occurs through the catalytic reaction of the methanol vapor with the platinum in the filament; nitromethane is sometimes added for greater power output and a steadier idle. Combustion keeps the plug's filament glowing hot, allowing it to ignite the next charge.
Since the ignition timing is not controlled electrically, as in a spark ignition engine, or by fuel injection , as in an ordinary diesel , it must be adjusted through the richness of the mixture, the ratio of nitromethane to methanol, the compression ratio , the cooling of the cylinder head , the type of glow plug, and so on. A richer mixture will tend to cool the filament and so retard ignition, slowing the engine; a rich mixture also eases starting. After starting, the engine can easily be leaned (by adjusting a needle valve in the spraybar) to obtain maximum power. Glowplug engines are also known as nitro engines. Nitro engines require a 1.5 volt ignitor to light the glow plug in the heat sink . Once primed, pulling the starter with the ignitor in place will start the engine.
Diesel engines are an alternative to methanol glow plug engines. These "diesels" run on a mixture of kerosene , ether , castor oil or vegetable oil , and a cetane booster such as amyl nitrate . Despite their name, their use of compression ignition , and their use of a kerosene fuel similar to diesel , model diesels share very little with full-size diesel engines .
Full-size diesel engines, such as those found in a truck , are fuel injected and either two-stroke or four-stroke . They use compression ignition to ignite the mixture: the compression within the cylinder heats the inlet charge sufficiently to cause ignition, without requiring an applied ignition source. A fundamental feature of such engines, unlike petrol (gasoline) engines, is that they draw in air alone and the fuel is only mixed by being injected into the combustion chamber separately. Model diesel engines are instead carbureted two-strokes using the crankcase for compression. The carburetor supplies a mixture of fuel and air into the engine, with the proportions kept fairly constant and their total volume throttled to control the engine power.
Apart from sharing the diesel's use of compression ignition, their construction has more in common with a small two-stroke motorcycle or lawnmower engine. In addition to this, model diesels have variable compression ratios . This variable compression is achieved by a "contra-piston", at the top of the cylinder, which can be adjusted by a screwed "T-bar". The swept volume of the engine remains the same, but as the volume of the combustion chamber at top dead centre is changed by adjusting the contra-piston, the compression ratio ((swept volume + combustion chamber volume) / combustion chamber volume) changes accordingly.
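As a worked example, using illustrative (assumed) figures rather than any particular engine's specifications: with a swept volume of 2.5 cc and a combustion-chamber volume at top dead centre of 0.1 cc,

\mathrm{CR} = \frac{V_{\text{swept}} + V_{\text{chamber}}}{V_{\text{chamber}}} = \frac{2.5\ \text{cc} + 0.1\ \text{cc}}{0.1\ \text{cc}} = 26{:}1

Screwing the contra-piston down reduces the chamber volume and so raises the ratio; backing it off does the reverse.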
Model diesels are found to produce more torque than glow engines of the same displacement , and are thought to get better fuel efficiency , because the same power is produced at a lower rpm , and in a smaller displacement engine. However, the specific power may not be significantly superior to a glow engine, due to the heavier construction needed to assure that the engine can withstand the much higher compression ratio , sometimes reaching 30:1. Diesels also run significantly quieter, due to the more rapid combustion, unlike two-stroke glow engines, in which combustion may still be occurring when the exhaust ports are uncovered, causing a significant amount of noise.
Recent developments in model engineering have produced true diesel model engines, with a traditional injector and injector pump, and these engines operate in the same way as a large diesel engine. | https://en.wikipedia.org/wiki/Model_engine |
Model engineering is the pursuit of constructing proportionally scaled miniature working representations of full-sized machines. It is a branch of metalworking with a strong emphasis on artisanry, as opposed to mass production . While now mainly a hobby , in the past it also had commercial and industrial purpose. The term 'model engineering' was in use by 1888. [ 1 ] In the United States , the term 'home shop machinist' is often used instead, although arguably the scope of this term is broader.
Model engineering is most popular in the industrialised countries that have an engineering heritage extending back to the days of steam power. That is, it is a pursuit principally found in the UK, USA, northwestern European countries and the industrialised British Commonwealth countries.
The 'classic' areas of model engineering interest are live steam models (typically steam locomotives , stationary engines , marine steam engines , Showman's engines , and traction engines ), [ 2 ] internal combustion engines , [ 3 ] and clock making . [ 4 ] Other popular subjects are Stirling engines , [ 5 ] workshop equipment, miniature machine tools and ornamental turning . These constitute stable genres which are often reflected in competition categories at model engineering exhibitions. In the past, amateur electrical experimentation (the precursor to hobby electronics) and ship modelling were considered as part of model engineering, but these are no longer regarded as core genres.
Model engineers typically make models by machining working parts from stock metal and metal castings. Some models are intended as utilitarian working models, others as highly meticulous display models, or sometimes a combination of both. The most elaborate models involve hand manufacture of thousands of parts, taking thousands of hours to complete, often over a number of years or even decades. The model engineer is often guided by commercially available drawings; however, some draw their own designs, or even work without drawings. Similarly, most model engineers will buy castings when required, but some buy or make foundry equipment to cast metal themselves.
Increasingly, 'modern' technologies such as computer-aided design , CNC (computer numerical control) equipment, laser cutting , 3D printing and embedded systems are becoming part of the hobby as more and more of its practitioners have developed skills and familiarity with these techniques through their work, whilst younger people familiar with newer processes discover the potential of traditional machining, narrowing the gap between 'model engineering' and ' maker culture '.
As an activity that involves extensive use of metalwork machine tools in a home workshop-based context, model engineering overlaps with other artisanal machine-tool based and allied metalwork pursuits including gunsmithing (particularly in the USA), manufacture of home metalworking tools and accessories, home cnc equipment building, antique machine tool collecting, antique vehicle and machine restoration, home welding and hobby metalcasting. Model engineering is closely associated with the hobby of running live steam locomotives, and overlaps to a degree with the making of non-working models, particularly those of mechanical subjects. Products such as Meccano and low-pressure toy steam engines have a loose affinity with model engineering, stemming from the production of scientific and mechanical toys beginning in the late 18th century. [ 6 ] Steam Punk , a post-industrial sculptural art style picking up on the aesthetic and kinetic qualities of old machinery, shares some overlap. [ citation needed ]
There is some debate about the appropriateness of the term 'model engineering'. Some argue that the term ' engineer ' should be reserved solely for those professionally qualified as such. However, the historic meaning of 'engineer' is one who constructs or tends engines, and as such is a fitting epithet for those who make working models as a hobby. In any case, since the term 'model engineer' was employed by 1888, the precedent for its use has long been a fait accompli .
Model live steam locomotives predominate as the most popular modelling subject in model engineering. [ 7 ] As such they deserve special mention. Live steam refers to the use of pressurised steam, heated in the model locomotive's own boiler , to turn the wheels via miniature steam cylinders . Not all locomotives are live steam; some model engineers make model locomotives powered by electricity or internal combustion engines. The criterion, however, is that the model is self-propelled, hence requiring an engine to be made or a motor to be installed, as opposed to the (usually much smaller) model trains that rely on an electrified track to run.
Live steam (and other self-propelled) locomotives are made in a range of sizes, or scales, according to track gauge. The smaller gauges, sometimes called 'garden gauges' because they can be set up in the owner's own garden, [ 8 ] or in the US called a backyard railroad , are sufficient to run by themselves but usually cannot haul the driver or passengers. The larger gauges are usually found on club tracks or miniature railways , and are intended to haul the driver and passengers.
Popular 'garden gauges' are '0' gauge, '1' gauge and 2½" gauge (ridable). Usual club track gauges are 3½", 5" and 7¼", and 4¾" and 7½" in North America. Larger miniature railway gauges such as 10¼" and 15" gauge are more common in zoo and park settings or as public passenger-hauling lines such as the Romney, Hythe & Dymchurch Railway . Various gauges have existed over time. 3½" and 5" gauge were proposed in 1898 as standard model gauges, although 5" gauge only became popular after the Second World War, along with 7¼" gauge. [ 9 ]
Not all model live steam locomotive enthusiasts are model engineers (and vice versa ). There are many live steam enthusiasts who prefer running the models on a track rather than spending long hours building them in a workshop, and so purchase a ready-made model locomotive. However, for many the joy of the hobby lies in the manufacturing process, culminating in the immense satisfaction of a running engine of any sort.
The aim of model engineering, the building of mechanical models, is now usually purely recreational, although beginning with the Industrial Revolution in the late 18th century through to the late 20th century such models were widely produced as aids to technical education , either as apprentice projects or as classroom or public institutional exhibits. They were also produced as commercial props to support a patent, to visualise a proposed capital venture, or to advertise a manufacturer's trade. Many museums in the old industrialised countries house original collections of mechanical models stemming from the earliest days of the industrial revolution. One of the earliest known models of a steam engine, that of a Newcomen beam engine, was made prior to 1763. [ 10 ] The Science Museum, London published catalogues illustrating many early models. [ 11 ] [ 12 ] Many of these models represent the same subjects that remain popular with model engineers today, which attests to the long tradition of model engineering.
The earliest publication to offer instruction to the public on building working steam engine models was the Model Dockyard Handy-book (2nd edition 1867) by E. Bell, proprietor of the Model Dockyard shop in London, which also offered the parts and completed models for sale. Bell was, he said, "Ship Modeller and Mechanist" to the Royal Family, the English Admiralty and various European royalty. [ 13 ] The book was aimed at building and operating these models as a recreational pursuit. In Britain, the establishment of a broad middle class by the late nineteenth century, an associated widening of leisure pursuits, and the rise of the Arts and Crafts movement that valorized handicrafts , saw a new constituency of amateur model engineers and experimenters interested in metalwork as a recreation. This was at a time when mechanical technology was seen as the driving force in rising economic prosperity . Articles and advertisements relating to model engineering began to appear in Amateur Work Illustrated magazine in the mid-1880s. [ 14 ] [ 15 ] With the rise of 'amateur' interest in conjunction with the working class mechanics who made models as apprentices, a new market niche was emerging, capitalised upon by Percival Marshall who began publishing Model Engineer and Amateur Electrician magazine in 1898 (now Model Engineer ). [ 16 ] Common interest in model engineering between men of lower, middle and even upper classes supported claims that model engineering had broken class barriers. However, a tradition that still persists is the use of pseudonyms in the model engineering press, as it was once considered inappropriate for professional gentlemen to contribute to "amateur" journals. Another reason was to disguise the fact that one contributor was single-handedly writing an entire edition of a journal on occasion, notably Edgar T. Westbury , who used no less than four noms de plume .
Model engineering remains popular despite major social changes over the past century. Among these changes have been the elimination of steam power (still the favourite subject for model engineers) from rail transport and industry, and the widespread de-industrialisation of Western countries beginning in the 1970s, along with a shift to consumer society and the introduction of a wide new range of competing leisure pursuits. These changes, along with the older age of many model engineers [ 17 ] and the decline of new apprenticeships, have prompted a long-running debate among model engineers over whether the hobby will die out. [ 18 ] [ 19 ]
Model engineers often join to form model engineering clubs and societies. [ 20 ] The first of these to form was the Society of Model and Experimental Engineers based in London, UK, in 1898, "along similar lines to the model yachting clubs" then popular. [ 21 ] [ 22 ] By 1948, "well over a hundred local clubs and societies" had been formed. [ 23 ] Model engineering clubs and societies now number in the hundreds across the UK, Canada, Australia, South Africa, New Zealand, Netherlands, Switzerland and elsewhere. [ 24 ] These clubs are a form of civil society organisation, often regarded as a sign of healthy democracy and community cohesion .
A major focus for many of these clubs is the operation of a club track or miniature railway for members' model live steam locomotives. These tracks are often run publicly and form part of community recreational and tourism infrastructure in their local area. Model engineering clubs and societies often also cater for model engineering interests beyond locomotives. Due to the inherently dangerous nature of live steam, clubs and societies are responsible for administering safety regulations, insurance and specialist boiler codes that cover both members and the public. To this end, model engineering clubs and societies often affiliate into national bodies that can lobby government to maintain the historical privilege they have to self-regulate their own safety standards.
Livelihoods based on model engineering include retailers who provide model engineers with equipment and supplies, small fabrication services who produce castings, make miniature live steam boilers and live steam kit parts (or even whole running models), commercial publishers in the model engineering press, and a very few professional model engineers who make one-off models by commission for private or institutional collectors. Most model engineers however are amateur constructors who rely on other income.
Each year, many local and regional model engineering shows and exhibitions are held wherever clubs are found, which recognize the best work of model engineers. The largest exhibitions are held in London, Harrogate and Bristol in the UK. In the UK, the Duke of Edinburgh Challenge Trophy, awarded annually at the Model Engineer Exhibition, reflects some of the best of the hobby. Pre-eminent among the Trophy's recipients is nine-time winner Cherry Hill. [ 25 ] On the web, the quality of some modern model engineers' work is celebrated at The Internet Craftsmanship Museum.
Many of the best-known names in model engineering are of those who wrote prolifically in the model engineering press. Henry Greenly may be the first notable model engineer, being founding editor of Model Railways and Locomotives Magazine in 1902 and author of Model Engineering and related books. [ 26 ] [ 27 ] Greenly produced a number of designs for spirit-fuelled model locomotives, which however could not haul passengers. Arguably the most notable model engineer of all was the obscure 'LBSC' (Lillian "Curly" Lawrence) . His most significant contribution was to overturn Greenly's prevailing orthodoxy and demonstrate that model locomotives of even small gauge (2½") could be powerful enough to haul passengers, by using miniature coal-fired firetube boilers , as were used in full-size locomotives. From 1923 until his death in 1967 he popularized passenger-hauling miniature live steam locomotives that could be built with minimal equipment, by publishing over 50 locomotive designs in various gauges, serialized mostly in Model Engineer magazine. [ 28 ]
Many other model engineers have contributed numerous designs notable for their enduring popularity. Prior to the appearance of Engineering in Miniature magazine in 1979 and Model Engineers' Workshop in 1990, these authors wrote almost exclusively in Model Engineer . Among these, Edgar T. Westbury produced many internal combustion engine designs, W.J. Hughes designed many agricultural and traction engine models. Colonel C. E. Bowden will be remembered as one of the most prolific experimenters with model aircraft, model boats and radio control, particularly his successes in powered model flight co-operating with E. T. Westbury, who made the Atom Minor engine that powered several of his early models. Martin Evans produced a great many more model locomotive designs, George H. Thomas specialised in designs for workshop accessories, Tubal Cain (T. D. Walshaw) developed a number of stationary engine designs, and Claude B. Reeve [ 29 ] produced many clock designs. Ian Bradley and Norman Hallows wrote individually and together under the pen-name of Duplex on a wide range of topics, notably finely finished and ingenious tooling. Don Young contributed locomotive designs to Model Engineer and then published his own quarterly live steam magazine Locomotives Large and Small from 1979 until his death in 1994. [ 30 ] More recently, Kozo Hiraoka has authored several series of logging locomotive articles in the U.S. magazine Live Steam . Cherry Hill was notable for her small-scale working models of unusual early steam vehicles. [ 31 ] [ 32 ]
Machine tools used for model engineering include the lathe , the mill , the shaper , and the drill press . [ 33 ] Until the introduction from Asia of relatively cheap machinery, beginning in the 1980s, UK or US made machine tools produced by Myford , South Bend , Bridgeport and other now-defunct Western companies were fairly ubiquitous in model engineering. [ 34 ] These days model engineers have a choice of new budget-made Asian machinery, the restoration of 'old iron' (used machinery made to high standards in the former Western industrial centres), or, if money is not an issue, new high-end machinery from the few remaining Western manufacturers. These machines only become truly useful once the model engineer accumulates a large set of associated tooling (such as drills, reamers, collets, etc.) that, all up, may cost more than the larger items of machinery. Model engineers often economise by making items of tooling themselves.
Although traditionally a manual hobby, that is, one that relies on the model engineer hand-making the parts with the assistance of manually operated machinery, computerised tools are becoming popular with some model engineers. Designs are now often produced with the aid of CAD software. Some model engineers use 3D CAD software to build the model in virtual space before commencing on the physical model. [ 35 ] Such CAD software also interfaces with CNC machinery, particularly milling machines, of which an increasing range is now aimed at model engineers and other 'home shop machinists', making it possible for some model components to be manufactured under computer control. [ 36 ] 3D printing is another technology for model engineers to explore. Model engineering continues to expand by incorporating new technologies including laser cutting , 3D printing and embedded electronics, establishing more and more common ground with the maker movement; these computerised pursuits are a sub-branch of model engineering and are not followed by the majority.
Kits of parts offer a shortcut to the traditional method of building. Kits fall into two categories, machined and unmachined kits. Unmachined kits usually consist of drawings, castings, stock metal, and possibly fasteners and other fixings necessary to complete the model. They require machining facilities to complete and often also require additional components and raw materials. Typically the minimum machine requirements are a lathe, drilling machine, and possibly a milling machine. A good level of knowledge about machining is necessary to successfully complete these kits. Machined kits are a set of parts that are fully machined and only require finishing with hand tools, painting, and assembly. Workshop machinery is not required. The kit will typically contain all the parts necessary to complete the model. These kits require a lot less work than an unmachined kit, but are very expensive, and the choice of subject matter is limited.
There are many books, magazines and internet forums about model engineering. Magazines such as Model Engineer and Live Steam remain the main source of detailed designs and plans (in addition to carrying news items and discussion of products and techniques). These detailed designs and plans contain instructions and drawings sufficient to build a particular model. In the pages of magazine back-issues, hundreds of such designs exist for all sorts of models. Many of the plans are also reprinted by plans services and model engineering suppliers. Books tend to discuss techniques (sometimes with detailed designs and plans), and forums are active in problem-solving and general discussion. There are also many model engineers' websites or blogs that feature their owners' current and past projects. Some model engineering webzines also exist.
Annual model engineering shows and exhibitions are held around the world, organized either by local and regional clubs or professional exhibition firms. The largest exhibitions are held in London, Doncaster (previously at Harrogate), and Bristol in the UK; York, Pennsylvania in the US; and Karlsruhe in Germany. The Miniature Engineering Craftsmanship Museum in Carlsbad, California (USA) has a permanent collection of exhibits. | https://en.wikipedia.org/wiki/Model_engineering
A model figure is a scale model representing a human, monster or other creature. Human figures may be either a generic figure of a type (such as " World War II Luftwaffe pilot "), a historical personage (such as " King Henry VIII "), or a fictional character (such as " Conan ").
Model figures are sold both as kits for enthusiasts to construct and paint and as pre-built, pre-painted collectable figurines . Model kits may be made in plastic (usually polystyrene ), polyurethane resin , or metal (including white metal ); collectables are usually made of plastic, porcelain , or (rarely) bronze .
Larger figures (12-inch or 30 cm tall) have been produced for recent movie characters ( Princess Leia from Star Wars , for example). Large plastic military figures are made by some model soldier manufacturers as a sideline.
Enthusiasts may pursue figure modeling in its own right or as an adjunct to military modeling .
There is also overlap with miniature figures (minis) used in wargames and role-playing games : minis are usually less than 54 mm scale , and do not necessarily represent any given personage.
In the early 1980s and 1990s, military modeling figures were largely produced in 1:72 and 1:35 scales, with other scales such as 1:48 and 1:32 holding a smaller market share. Typically 1:48 scale was reserved for aircraft and aircraft support vehicles, with figures being maintenance and flight crews, while 1:32 scale miniatures were composed largely of vehicles such as tanks and their crews.
1:35 scale miniatures were produced by many companies such as Tamiya , Testors , Revell , Monogram and others. Kits of soldiers, vehicles and combinations covered World War I through Vietnam, with the largest portion centering on World War II. 1:72 scale miniatures covered a much wider and more diverse range of time periods, with Atlantic offering figures of Ancient Egyptians, Greeks, Romans, cowboys, American Indians and many more. Other companies such as Airfix supplied not only high-quality figures in 1:72 scale but also fine planes and military vehicles, and still do so today. One of the largest distinctions between 1:72 scale and 1:35 scale, aside from the obvious size, was the amount of ready-to-paint dioramas and sets available to small-scale modelers. Airfix, a leader in the small-scale model market, offered several kits for modelers, from pontoon bridges to the Atlantic wall , Waterloo , and many others. These kits came with everything a hobbyist would need to portray a given moment, from buildings and trees to vehicles and men. None of these were available to the larger scale modeler.
Tamiya, a higher-end supplier of military vehicle and soldier kits, has in the past few years taken 1:48 scale modeling a step further, offering an interesting line of German and American World War II figures and vehicles and making it possible to incorporate tanks, jeeps, and foot soldiers into dioramas with aircraft, something which had long been possible only in 1:72 scale. For the serious military modeler this opens a new realm of possibilities in diorama making.
The same growth in availability is true for 1:32 scale as well. For quite a while 1:32 scale figures were more or less better versions of the army men children play with. Kits came as single-cast figures molded as a unit instead of the ready-to-assemble versions found at 1:48 and 1:35 scale, where arms, helmets and gear must be cut from plastic sprues and glued together. 1:32 scale soldiers were often of slightly lower quality than their 1:35 scale counterparts, as they were molded from a softer plastic, allowing things like rifle barrels to bend while the soldiers sat in the boxes. 1:32 scale kits were limited, and this made extensive modeling difficult. Lately, 1:32 scale modeling has made a large push to expand, as companies now sell these figures professionally pre-painted, making them exceptional for large-scale military gaming of all sorts. In fact, the diorama industry has started supplying pre-painted diorama scenery as well, making high-quality 1:32 scale diorama making much easier than ever before.
Figure model kits can be as large as 1:16 scale. These kits include motorized vehicles and stand-alone figures. Kits of this size take a great deal of effort and time to paint, as great care must be taken to get the details of the paint job precise; with smaller kits, while detail is still essential, there is less to be done.
Many model figures used for gaming are measured in millimeters ranging from 15 to 80 mm with miniature wargaming figures running on the smaller end especially where armored vehicles are used. Traditional modelers tend to stick to the more common 1:72-1:32 scales leaving the other sizes to the gamers.
As with all things, quality and price vary from manufacturer to manufacturer, and the result of a model is often limited by its initial quality. Today many new model manufacturers go to great lengths to make 1:72, 1:48, 1:35 and 1:32 scale models as highly detailed and realistic as possible. This, unfortunately, makes many of the older, still-existing sets less desirable for diorama making but still fun to build, especially as starter kits for a less experienced modeller. Many of these older kits can still be found online at a reasonable price, and while they don't offer as many pieces or as highly detailed molding, they can still produce a respectable product after paint and proper weathering are applied.
Model aircraft and vehicle kits in even smaller scales will also often include "model figures", or figures can be purchased as accessories. There are also kits of the drivers and servicers of cars, and series of figurines to stand on the streets and platforms of model railroads.
Model figures based on icons like Hello Kitty , as well as characters appearing in anime , manga , kaiju (monster) series, science fiction / fantasy films and video games , are a major part of otaku fandom. They are also a large part of the global animation merchandising market from Japan, which is estimated to be worth around 663 billion Japanese yen. [ 1 ] Some hobbyists concentrate specifically on a certain type of figure, such as garage kits , gashapon (capsule toys), or PVC bishōjo (pretty girl) statues. Such figures feature prominently in the work of modern artist Takashi Murakami . Through his company Kaikai Kiki , he has produced a number of limited designer toys to be sold in otaku -oriented stores.
While many different companies manufacture and sell anime figures, prices for the same figure vary widely depending on the authenticity and quality of the figure. Authentic figures are normally figures of characters that are licensed by the creators, thus leading to significantly higher prices. Manufacturers well known for their consistency and quality include Good Smile Company , Aniplex , Hot Toys , Bandai and others. [ 2 ] [ better source needed ] Figures are usually classified as prize figures, scale figures and others, with prize figures being lower-cost options often used in claw crane games, while scale figures can cost several hundred to several thousand US dollars. [ 3 ] [ better source needed ]
A noodle stopper is a type of figurine based on manga , anime , or video game characters which is ostensibly meant to hold down, by gravity, the lid of a ramen container so that it does not boil over. Due to their ornate designs, they are often displayed as ordinary figurines. [ 4 ] ( Stopper is a misnomer , as they are not inserted into the container.)
Garage kit figures are produced by both amateurs and professionals, [ 5 ] and usually cast out of polyurethane resin . In Japan they often portray anime characters and in the US they are often movie monsters . Garage kits are usually produced in limited numbers and are more expensive than typical injection molded plastic figures and kits.
In the 1950s and 1960s plastic model kits such as cars, planes or space ships became common in the US. There were also cheap plastic models for the popular market of movie monsters, comic book heroes, and movie and television characters in 1:8 size (about 9 inches or 23 cm in height). These included monsters like Frankenstein , The Wolf Man , Dracula , and the Creature from the Black Lagoon . One of the largest producers of monster figures was the Aurora Plastics Corporation , which produced many thousands of figures from each mould. This market disappeared and no firm since has produced anything to match those quantities. Instead, smaller (3¾-inch or 10 cm) action figures have taken over the popular market.
In the 1970s, Aurora's figure molds were sold to Monogram, and by the mid- to late 1970s the models had been discontinued and were difficult to find in hobby stores.
In the mid-1980s, some who had been kids in the 1950s and 1960s resumed their interest in the old Aurora monster models. An underground market developed through which enthusiasts could acquire the original plastic model kits. While prices in the 1950s and 1960s had been only a few dollars, the kits were now selling for as much as $125 for some of the rarer monster models.
In the early to mid-1980s, hobbyists began creating their own garage kits of movie monsters, often without permission from copyright holders. They were usually produced in limited numbers and sold primarily by mail order and at toy and hobby conventions.
In the mid- to late 1980s, two model kit companies moved the monster model kit hobby toward the mainstream. Horizon Models in California and Screamin' Models in New York began licensing vinyl model kits of movie monsters. Horizon focused primarily on classic horror film characters (like Bride of Frankenstein , Invisible Man , The Phantom of the Opera ) and comic book characters (like Captain America and Iron Man ). Screamin' focused primarily on characters from more contemporary slasher movies like A Nightmare on Elm Street , Hellraiser and franchises like Star Wars and Mars Attacks . [ 6 ] Hobby stores began to carry these products in limited supply.
By the 1990s model kits were produced in the US and UK as well as Japan and distributed through hobby and comic stores. Large hobby companies like AMT - Ertl and Revell /Monogram (the same Monogram that bought the Aurora monster molds) began marketing vinyl model kits of movie monsters, the classic Star Trek characters, and characters from one of the Batman films. There was an unprecedented variety of licensed model figure kits.
In the late 1990s model kit sales went down. Hobby and comic stores and their distributors began carrying fewer garage kits or closed down. Producers like Horizon and Screamin' shut their doors.
As of 2009, there are two American garage kit magazines, Kitbuilders Magazine [ 7 ] and Amazing Figure Modeler , [ 8 ] and there are garage kit conventions held each year, like WonderFest USA in Louisville, Kentucky. [ 9 ]
Model figure collectors, like most hobby collectors , usually have a specific criterion for what they collect, such as Civil War soldiers or Warhammer gaming figures.
Specifically with an eye to collectors, manufacturers of collectable model figures make chase figures . This is a model figure that is released in limited amounts relative to the rest of an assortment, often something like "one chase figure for every two cases of regular product" or similar. This is comparable to the chase cards in the collectible card game industry. [ 10 ] [ 11 ] The name comes from the assumption that collectors, in their need to "collect them all" will put in more effort than usual to "chase" down these figures.
Generally speaking, chase figures are rare in toy lines aimed at youth markets, although there are occasionally shortpacked figures (shipped in lower numbers than other figures in its release cycle). Chase figures are more common in collector-oriented lines like Marvel Legends and WWE Classics . | https://en.wikipedia.org/wiki/Model_figure |
A model maker is a professional craftsperson who creates a three-dimensional representation of a design or concept. Most products in use and in development today first take form as a model. This "model" may be an exacting duplicate ( prototype ) of the future design or a simple mock-up of the general shape or concept. Many prototype models are used for testing the physical properties of the design, others for usability and marketing studies.
Mock-ups are generally used as part of the design process to help convey each new iteration. Some model makers specialize in "scale models" that allow an easier grasp of the whole design or for portability of the model to a trade show or an architect or client's office. Other scale models are used in museum displays and in the movie special effects industry. Model makers work in many environments from private studio/shops to corporate design and engineering facilities to research laboratories. [ 1 ]
The model maker must be highly skilled in the use of many machines , such as manual lathes , manual mills , computer numerical control (CNC) machines, lasers, wire EDM machines, water jet saws, TIG welders, sheet metal fabrication tools and woodworking tools. Fabrication processes model makers take part in include powder coating, shearing, punching, plating, folding, forming and anodizing. Some model makers also use increasingly automated processes, for example cutting parts directly with digital data from computer-aided design plans on a CNC mill or creating the parts through rapid prototyping . [ 2 ] Hand tools used by a model maker include an X-Acto knife, tweezers, sprue cutters, tape, glue, paint, and paint brushes. [ 3 ]
There are two basic processes used by the model maker to create models: additive and subtractive. Additive can be as simple as adding clay to create a form, sculpting and smoothing to the final shape. Body fillers, foam and resins are also used in the same manner. Most rapid prototyping technologies are based on the additive process, solidifying thin layered sections or slices one on top of each other. Subtractive is like whittling a solid block of wood or chiseling stone to the desired form. Most milling and other machining methods are subtractive, progressively using smaller and finer tools to remove material from the rough shape to get to the level of detail needed in the final model. [ 4 ]
Model makers may use a combination of these methods and technologies to create the model in the most expeditious manner. The parts are usually test fitted, then sanded and painted to represent the intended finish or look. Model makers are required to recreate many faux finishes like brick, stone, grass, molded plastic textures, glass, skin and even water. | https://en.wikipedia.org/wiki/Model_maker |
Model military vehicles are miniature versions of military vehicles . They range in size and complexity; from simplified small-scale models for wargaming , to large, super-detailed renditions of specific real-life vehicles.
The 'scale' is the proportion of actual size the replica or model represents. Scale is usually expressed as a ratio (e.g. '1:35') or as a fraction (e.g. '1/35'). In either case it conveys the notion that the replica or model is accurately scaled in all visible proportions from a full-size prototype object. Thus a 1:35 scale model tank is 1/35 the size of the actual vehicle upon which the model is based. Models generally make no attempt to replicate scale weight, only size.
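For instance, taking an assumed prototype length of 7.0 m (7,000 mm) purely for illustration, a 1:35 replica works out to

L_{\text{model}} = \frac{L_{\text{prototype}}}{35} = \frac{7000\ \text{mm}}{35} = 200\ \text{mm}

and the same division applies to every other visible dimension.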
The most popular scales, by far, are 1:35 and 1:72 . Other less commonly used scales for commercially produced kits include: 1:1 , 1:6, 1:9 ("Traditional" scale), 1:12 , 1:16 (RC tanks, scale model kits), 1:24 , 1:25 , 1:30, 1:32 , 1:48 , 1:50 , 1:64 , 1:87 (railroad HO scale), 1:144 , 1:250 , 1:285 , 1:300 , and 1:350 . A relatively recent [ when? ] trend led by Tamiya is military vehicle kits in 1:48 scale – a popular scale for military aircraft models. The scale had earlier been introduced by companies such as Aurora and Bandai in the 1970s; however, it did not gain popularity at the time, largely because of the limited accuracy and detail of those kits. Scratchbuilt models may be in any scale but tend to follow the most popular kit scales due to the ease of finding kit components which may be used in the scratchbuilt model.
Larger-scale models tend to incorporate higher levels of detail, but even smaller-scale models may be quite intricate.
Military vehicle modelers build a wide variety of models. Tanks and other armored fighting vehicles are the most popular subjects at model contests. Modelers also build ordnance, military trucks, tractors, half-tracks, artillery, and lighter vehicles such as jeeps and motorcycles. Models may be displayed in stand-alone mode, that is, with no base, or on a decorative base, often with a label of some kind. More elaborate bases may include scale scenery intended to depict the setting in which the vehicle served. This shades into the closely related hobby of diorama building.
Modelers tend to focus on vehicles from three eras: World War I, World War II, and the modern era . The first category covers armored vehicles from their combat debut during the First World War until approximately 1939. Many vehicles of this time period may be considered experimental, and few made major contributions to the few battles in which they took part.
Models depicting vehicles from the World War I era and the following interwar years are not as numerous as their later world war counterparts, but this has begun to change in recent years. Since the centenary of the start of World War I, more manufacturers have begun to release kits of World War I and interwar subjects: Takom releasing a range of British tanks such as the Mark I and Whippet ; Meng providing two variants of the FT-17 and the German A7V ; and Hobbyboss releasing an interwar Vickers Medium Mark II are all examples of this.
Vehicles used between 1939 and 1945 fall into the World War II category. Even though this era spans the shortest number of years, it is by far the most popular for armor modelers due to the enormous range of vehicles used and the vast improvements in armor technology. During the early part of the war, most armored vehicles were smaller, less heavily armored, and lightly armed. Major tank engagements early on convinced governments on all sides of the need for more survivable and deadlier vehicles. This means that there is a large variety of different subjects which were designed to fulfil different roles under different doctrines.
Any vehicle serving in a setting after 1945 is considered "modern." This encompasses a longer time span and very large number of armor designs from many countries.
Models may also be categorized by place of service, for example, US or Soviet. They may also be categorized by function, for example, combat engineering vehicles, recovery vehicles, etc. In all cases, the national and unit markings on the replica determine the era and user nationality. For example, a model of a Sherman tank , a World War II design, would be considered a 'modern' model if the tank were shown in Israeli markings from the Six-Day War . The same vehicle in World War II US Army markings would be considered a World War II Allied subject.
Models are generally built with historical accuracy in mind, and each model may represent many hours of research effort on the part of the modeler. Frequently, modelers display some of their research work alongside their model.
There is generally some crossover of modelers between the eras, though some focus solely on a specific era, country of origin or operation, or even on a specific vehicle and its variants.
Models are usually assembled from commercial kits (for exceptions see below). Typically, a model kit consists of a set of parts, instructions for their assembly, and a small sheet of markings in decal form.
Parts are produced by injection of liquid styrene plastic under very high pressure into complex steel molds. These molds are generally composed of two halves that sandwich the parts; however, 'slide molds' may consist of many steel components to allow greater levels of detail to be incorporated into a single sprue. Once the plastic cools, it is removed from the mold. In the 1960s and 1970s, typical vehicle kits might contain 50 to 200 individual parts. Today it is common for a single vehicle kit to contain from 300 to 1200 parts. Each part must be carefully cut from the 'sprue' (the plastic channels that allow the plastic to flow into the mold and which hold the parts in place), cleaned of any flaws or mold marks, and then assembled.
Instructions consist of paper booklets or sheets supplied with each kit. Usually, instructions show drawings of the parts. A recent trend has been the use of photographs rather than drawings, but these types of instructions have not proven popular and may be declining in use. For a kit with hundreds of parts, good instructions are vital. Flaws in instructions are not uncommon.
Markings for the model usually are provided as decals .
Several companies produce armor model kits, the most famous of which are Airfix , Dragon Models Limited , Tamiya , Trumpeter , Academy , Hobby Fan , Italeri , Revell -Germany/Monogram and AFV Club . The focus of many manufacturers of late has been to increase the accuracy of their kits and provide alternative types of material such as photo etch details and turned metal barrels.
Completed models can be categorized generally into three classes: kits built 'out of the box', customized kits, and scratchbuilt models.
Models built 'out of the box' are built according to kit instructions, using no materials except those provided in the kit itself. In the past, there was some tendency to view 'out of the box' builds as simpler or of a lower standard of detail than modified kits (see below). However, recent trends in which kits contain over 1,000 individual pieces including parts from plastic, etched brass, and aluminum have given new meaning to the 'out of the box' build. Today, a stock kit can be very highly detailed.
Customized kits are typically built by more experienced modelers who take a kit and add components, either scratchbuilt or from commercial conversion and aftermarket accessories. Such models may be more highly detailed than a straight 'out of the box' build, though the trend toward more detailed kits is decreasing the difference. The term 'kitbashing' denotes models built using parts from more than one kit to make a single, more accurate or different model. Many armor modelers use aftermarket sets and scratchbuilt parts to make their models more accurate or simply unique. In extreme, master-level cases, a model with hundreds of kit components may be detailed with several hundred additional commercial and home-fabricated parts to reach a very high level of realism.
Scratchbuilt models are those for which no kit exists; highly skilled modelers create their vehicle from sheet plastic and components they fabricate themselves. Some scratchbuilt models may contain a few commercial components, but typically it is a small proportion of all the model's parts.
Scratchbuilt models may also be made from brass and aluminum, cast in pewter (a low-temperature metal), or cast with two-part resins in molds made of RTV rubber. A scratchbuilding modeler should possess talents in the following areas: soldering, gluing, drilling, taping, grinding, sanding, cutting and shaping metals and plastics, creating RTV molds (one-, two- and three-part types), painting and weathering, researching prototype material, casting in low-temperature metals, creating sketches and diagrams of what is being made, measuring in inches or millimeters, and using calipers and other specialized tools.
"Aftermarket" denotes any kit or detail set that is sold to replace existing kit parts in order to reproduce a more accurate model or simply a different version not otherwise available. The media used by aftermarket companies range from turned aluminum and brass, photo-etched steel or brass sheets, pre-bent brass wire, cast metals, and resin. Notable aftermarket companies include Formations, The Tank Workshop, Tank, Azimut, Eduard, Verlinden, Friulmodel, Legend, and Modelkasten.
Aftermarket markings are also available. Firms such as Archer Dry Transfers or Decalomaniacs produce stand-alone sheets of wet or dry transfer markings to allow the modeler to complete a different or more accurate variant.
Enthusiasts may pursue military vehicle modeling in its own right or as an adjunct to other military modeling . There is also some crossover with wargaming , diorama building, and re-enacting.
Models may be displayed on their own, on a base or as part of a diorama.
Many models are displayed with no base or other setting. Their wheels or track rest upon the shelf or table on which they are displayed. This display method is the easiest and cheapest, but has the disadvantage that the fragile model may be damaged when handled.
A simple wooden base adds an element of protection because the model itself does not need to be handled; it can be moved by handling the base. Bases may also hold a plate with some information about the model, such as its title or designation, or some historical background. These bases typically consist of a frame of wood or other material. Finishes on bases range from painted plastic to stained wood to simple landscaping. The disadvantage is that the base adds expense and time to the project.
A diorama is a more elaborate base with landscaping to provide a setting for the model, and often includes a story with figures . Dioramas have the same advantages and disadvantages of plain bases, but to a greater degree.
Models are often displayed in competition such as the AMPS annual show, or in club displays at hobby shops and other events.
Several organizations and publications exist to support and promote the hobby of modeling military vehicles. The Armor Modeling and Preservation Society or AMPS is an 800-plus member organization devoted to the hobby. The International Plastic Modellers' Society (or IPMS) supports modelers of all types including military vehicle modelers. The Miniature Armoured Fighting Vehicle Association (MAFVA, http://www.mafva.org/ ) is a UK-based association.
Commercial publications devoted to or including military vehicle modeling include AFVModeller, Military Miniatures in Review (MMiR), Armour Modelling, and Military Modelling. | https://en.wikipedia.org/wiki/Model_military_vehicle |
A model organism is a non-human species that is extensively studied to understand particular biological phenomena, with the expectation that discoveries made in the model organism will provide insight into the workings of other organisms. [ 1 ] [ 2 ] Model organisms are widely used to research human disease when human experimentation would be unfeasible or unethical . [ 3 ] This strategy is made possible by the common descent of all living organisms, and the conservation of metabolic and developmental pathways and genetic material over the course of evolution . [ 4 ]
Research using animal models has been central to most of the achievements of modern medicine. [ 5 ] [ 6 ] [ 7 ] It has contributed most of the basic knowledge in fields such as human physiology and biochemistry , and has played significant roles in fields such as neuroscience and infectious disease . [ 8 ] [ 9 ] The results have included the near- eradication of polio and the development of organ transplantation , and have benefited both humans and animals. [ 5 ] [ 10 ] From 1910 to 1927, Thomas Hunt Morgan 's work with the fruit fly Drosophila melanogaster identified chromosomes as the vector of inheritance for genes, [ 11 ] [ 12 ] and Eric Kandel wrote that Morgan's discoveries "helped transform biology into an experimental science". [ 13 ] Research in model organisms led to further medical advances, such as the production of the diphtheria antitoxin [ 14 ] [ 15 ] and the 1922 discovery of insulin [ 16 ] and its use in treating diabetes, which had previously meant death. [ 17 ] Modern general anaesthetics such as halothane were also developed through studies on model organisms, and are necessary for modern, complex surgical operations. [ 18 ] Other 20th-century medical advances and treatments that relied on research performed in animals include organ transplant techniques, [ 19 ] [ 20 ] [ 21 ] [ 22 ] the heart-lung machine, [ 23 ] antibiotics , [ 24 ] [ 25 ] [ 26 ] and the whooping cough vaccine. [ 27 ]
In researching human disease , model organisms allow for a better understanding of the disease process without the added risk of harming an actual human. The species of the model organism is usually chosen so that it reacts to disease or its treatment in a way that resembles human physiology , even though care must be taken when generalizing from one organism to another. [ 28 ] However, many drugs, treatments and cures for human diseases are developed in part with the guidance of animal models. [ 29 ] [ 30 ] Treatments for animal diseases have also been developed, including for rabies , [ 31 ] anthrax , [ 31 ] glanders , [ 31 ] feline immunodeficiency virus (FIV), [ 32 ] tuberculosis , [ 31 ] Texas cattle fever, [ 31 ] classical swine fever (hog cholera), [ 31 ] heartworm , and other parasitic infections . [ 33 ] Animal experimentation continues to be required for biomedical research, [ 34 ] and is used with the aim of solving medical problems such as Alzheimer's disease, [ 35 ] AIDS, [ 36 ] multiple sclerosis, [ 37 ] spinal cord injury, many headaches, [ 38 ] and other conditions in which there is no useful in vitro model system available.
Model organisms are drawn from all three domains of life, as well as viruses . One of the first model systems for molecular biology was the bacterium Escherichia coli ( E. coli ), a common constituent of the human digestive system. The mouse ( Mus musculus ) has been used extensively as a model organism and is associated with many important biological discoveries of the 20th and 21st centuries. [ 39 ] Other examples include baker's yeast ( Saccharomyces cerevisiae ), the T4 phage virus, the fruit fly Drosophila melanogaster , the flowering plant Arabidopsis thaliana , and guinea pigs ( Cavia porcellus ). Several of the bacterial viruses ( bacteriophage ) that infect E. coli also have been very useful for the study of gene structure and gene regulation (e.g. phages Lambda and T4 ). [ 40 ] Disease models are divided into three categories: homologous animals have the same causes, symptoms and treatment options as would humans who have the same disease, isomorphic animals share the same symptoms and treatments, and predictive models are similar to a particular human disease in only a couple of aspects, but are useful in isolating and making predictions about mechanisms of a set of disease features. [ 41 ]
The use of animals in research dates back to ancient Greece , with Aristotle (384–322 BCE) and Erasistratus (304–258 BCE) among the first to perform experiments on living animals. [ 42 ] Discoveries in the 18th and 19th centuries included Antoine Lavoisier 's use of a guinea pig in a calorimeter to prove that respiration was a form of combustion, and Louis Pasteur 's demonstration of the germ theory of disease in the 1880s using anthrax in sheep. [ 43 ]
Research using animal models has been central to most of the achievements of modern medicine. [ 5 ] [ 6 ] [ 7 ] It has contributed most of the basic knowledge in fields such as human physiology and biochemistry , and has played significant roles in fields such as neuroscience and infectious disease . [ 8 ] [ 9 ] For example, the results have included the near- eradication of polio and the development of organ transplantation , and have benefited both humans and animals. [ 5 ] [ 10 ] From 1910 to 1927, Thomas Hunt Morgan 's work with the fruit fly Drosophila melanogaster identified chromosomes as the vector of inheritance for genes. [ 11 ] [ 12 ] Drosophila became one of the first, and for some time the most widely used, model organisms, [ 44 ] and Eric Kandel wrote that Morgan's discoveries "helped transform biology into an experimental science". [ 13 ] D. melanogaster remains one of the most widely used eukaryotic model organisms. During the same time period, studies on mouse genetics in the laboratory of William Ernest Castle in collaboration with Abbie Lathrop led to generation of the DBA ("dilute, brown and non-agouti") inbred mouse strain and the systematic generation of other inbred strains. [ 45 ] [ 46 ] The mouse has since been used extensively as a model organism and is associated with many important biological discoveries of the 20th and 21st centuries. [ 39 ]
In the late 19th century, Emil von Behring isolated the diphtheria toxin and demonstrated its effects in guinea pigs. He went on to develop an antitoxin against diphtheria in animals and then in humans, which resulted in the modern methods of immunization and largely ended diphtheria as a threatening disease. [ 14 ] The diphtheria antitoxin is famously commemorated in the Iditarod race, which is modeled after the delivery of antitoxin in the 1925 serum run to Nome . The success of animal studies in producing the diphtheria antitoxin has also been attributed as a cause for the decline of the early 20th-century opposition to animal research in the United States. [ 15 ]
Subsequent research in model organisms led to further medical advances, such as Frederick Banting 's research in dogs, which determined that the isolates of pancreatic secretion could be used to treat dogs with diabetes . This led to the 1922 discovery of insulin (with John Macleod ) [ 16 ] and its use in treating diabetes, which had previously meant death. [ 17 ] John Cade 's research in guinea pigs discovered the anticonvulsant properties of lithium salts, [ 47 ] which revolutionized the treatment of bipolar disorder , replacing the previous treatments of lobotomy or electroconvulsive therapy. Modern general anaesthetics, such as halothane and related compounds, were also developed through studies on model organisms, and are necessary for modern, complex surgical operations. [ 18 ] [ 48 ]
In the 1940s, Jonas Salk used rhesus monkey studies to isolate the most virulent forms of the polio virus, [ 49 ] which led to his creation of a polio vaccine . The vaccine, which was made publicly available in 1955, reduced the incidence of polio 15-fold in the United States over the following five years. [ 50 ] Albert Sabin improved the vaccine by passing the polio virus through animal hosts, including monkeys; the Sabin vaccine was produced for mass consumption in 1963, and had virtually eradicated polio in the United States by 1965. [ 51 ] It has been estimated that developing and producing the vaccines required the use of 100,000 rhesus monkeys, with 65 doses of vaccine produced from each monkey. Sabin wrote in 1992, "Without the use of animals and human beings, it would have been impossible to acquire the important knowledge needed to prevent much suffering and premature death not only among humans, but also among animals." [ 52 ]
Other 20th-century medical advances and treatments that relied on research performed in animals include organ transplant techniques, [ 19 ] [ 20 ] [ 21 ] [ 22 ] the heart-lung machine, [ 23 ] antibiotics , [ 24 ] [ 25 ] [ 26 ] and the whooping cough vaccine. [ 27 ] Treatments for animal diseases have also been developed, including for rabies , [ 31 ] anthrax , [ 31 ] glanders , [ 31 ] feline immunodeficiency virus (FIV), [ 32 ] tuberculosis , [ 31 ] Texas cattle fever, [ 31 ] classical swine fever (hog cholera), [ 31 ] heartworm , and other parasitic infections . [ 33 ] Animal experimentation continues to be required for biomedical research, [ 34 ] and is used with the aim of solving medical problems such as Alzheimer's disease, [ 35 ] AIDS, [ 36 ] [ 53 ] [ 54 ] multiple sclerosis, [ 37 ] spinal cord injury, many headaches, [ 38 ] and other conditions in which there is no useful in vitro model system available.
Models are those organisms with a wealth of biological data that make them attractive to study as examples for other species and/or natural phenomena that are more difficult to study directly. Continual research on these organisms focuses on a wide variety of experimental techniques and goals from many different levels of biology—from ecology , behavior and biomechanics , down to the tiny functional scale of individual tissues , organelles and proteins . Inquiries about the DNA of organisms are classed as genetic models (with short generation times, such as the fruitfly and nematode worm), experimental models, and genomic parsimony models, which investigate pivotal positions in the evolutionary tree. [ 55 ] Historically, model organisms include a handful of species with extensive genomic research data, such as the NIH model organisms. [ 56 ]
Often, model organisms are chosen on the basis that they are amenable to experimental manipulation. This usually will include characteristics such as short life-cycle , techniques for genetic manipulation ( inbred strains, stem cell lines, and methods of transformation ) and non-specialist living requirements. Sometimes, the genome arrangement facilitates the sequencing of the model organism's genome, for example, by being very compact or having a low proportion of junk DNA (e.g. yeast , arabidopsis , or pufferfish ). [ 57 ]
When researchers look for an organism to use in their studies, they look for several traits. Among these are size, generation time , accessibility, manipulation, genetics, conservation of mechanisms, and potential economic benefit. As comparative molecular biology has become more common, some researchers have sought model organisms from a wider assortment of lineages on the tree of life.
The primary reason for the use of model organisms in research is the evolutionary principle that all organisms share some degree of relatedness and genetic similarity due to common ancestry . The study of taxonomic human relatives, then, can provide a great deal of information about mechanism and disease within the human body that can be useful in medicine. [ citation needed ]
Various phylogenetic trees for vertebrates have been constructed using comparative proteomics , genetics, genomics as well as the geochemical and fossil record. [ 58 ] These estimations tell us that humans and chimpanzees last shared a common ancestor about 6 million years ago (mya). As our closest relatives, chimpanzees have a lot of potential to tell us about mechanisms of disease (and what genes may be responsible for human intelligence). However, chimpanzees are rarely used in research and are protected from highly invasive procedures. Rodents are the most common animal models. Phylogenetic trees estimate that humans and rodents last shared a common ancestor ~80-100mya. [ 59 ] [ 60 ] Despite this distant split, humans and rodents have far more similarities than they do differences. This is due to the relative stability of large portions of the genome, making the use of vertebrate animals particularly productive. [ citation needed ]
Genomic data is used to make close comparisons between species and determine relatedness. Humans share about 99% of their genome with chimpanzees [ 61 ] [ 62 ] (98.7% with bonobos) [ 63 ] and over 90% with the mouse. [ 60 ] With so much of the genome conserved across species, it is relatively impressive that the differences between humans and mice can be accounted for in approximately six thousand genes (of ~30,000 total). Scientists have been able to take advantage of these similarities in generating experimental and predictive models of human disease. [ citation needed ]
There are many model organisms. One of the first model systems for molecular biology was the bacterium Escherichia coli , a common constituent of the human digestive system. Several of the bacterial viruses ( bacteriophage ) that infect E. coli also have been very useful for the study of gene structure and gene regulation (e.g. phages Lambda and T4 ). However, it is debated whether bacteriophages should be classified as organisms, because they lack metabolism and depend on functions of the host cells for propagation. [ 64 ]
In eukaryotes , several yeasts, particularly Saccharomyces cerevisiae ("baker's" or "budding" yeast), have been widely used in genetics and cell biology , largely because they are quick and easy to grow. The cell cycle in a simple yeast is very similar to the cell cycle in humans and is regulated by homologous proteins. The fruit fly Drosophila melanogaster is studied, again, because it is easy to grow for an animal, has various visible congenital traits and has a polytene (giant) chromosome in its salivary glands that can be examined under a light microscope. The roundworm Caenorhabditis elegans is studied because it has very defined development patterns involving fixed numbers of cells, and it can be rapidly assayed for abnormalities. [ 65 ]
Animal models serving in research may have an existing, inbred or induced disease or injury that is similar to a human condition. These test conditions are often termed animal models of disease . The use of animal models allows researchers to investigate disease states in ways which would be inaccessible in a human patient, performing procedures on the non-human animal that imply a level of harm that would not be considered ethical to inflict on a human. [ citation needed ]
The best models of disease are similar in etiology (mechanism of cause) and phenotype (signs and symptoms) to the human equivalent. However, complex human diseases can often be better understood in a simplified system in which individual parts of the disease process are isolated and examined. For instance, behavioral analogues of anxiety or pain in laboratory animals can be used to screen and test new drugs for the treatment of these conditions in humans. A 2000 study found that animal models concorded (coincided on true positives and false negatives) with human toxicity in 71% of cases, with 63% for nonrodents alone and 43% for rodents alone. [ 66 ]
In 1987, Davidson et al. suggested that selection of an animal model for research be based on nine considerations. These include
1) appropriateness as an analog, 2) transferability of information, 3) genetic uniformity of organisms, where applicable, 4) background knowledge of biological properties, 5) cost and availability, 6) generalizability of the results, 7) ease of and adaptability to experimental manipulation, 8) ecological consequences, and 9) ethical implications. [ 67 ]
Animal models can be classified as homologous, isomorphic or predictive. Animal models can also be more broadly classified into four categories: 1) experimental, 2) spontaneous, 3) negative, 4) orphan. [ 68 ]
Experimental models are most common. These refer to models of disease that resemble human conditions in phenotype or response to treatment but are induced artificially in the laboratory.
Spontaneous models refer to diseases that are analogous to human conditions that occur naturally in the animal being studied. These models are rare, but informative. Negative models essentially refer to control animals, which are useful for validating an experimental result. Orphan models refer to diseases for which there is no human analog and occur exclusively in the species studied. [ 68 ]
The increase in knowledge of the genomes of non-human primates and other mammals that are genetically close to humans is allowing the production of genetically engineered animal tissues, organs and even animal species which express human diseases, providing a more robust model of human diseases in an animal model.
Animal models observed in the sciences of psychology and sociology are often termed animal models of behavior . It is difficult to build an animal model that perfectly reproduces the symptoms of depression in patients. Depression, like other mental disorders , consists of endophenotypes [ 83 ] that can be reproduced independently and evaluated in animals. An ideal animal model offers an opportunity to understand molecular , genetic and epigenetic factors that may lead to depression. By using animal models, the underlying molecular alterations and the causal relationship between genetic or environmental alterations and depression can be examined, which would afford a better insight into the pathology of depression. In addition, animal models of depression are indispensable for identifying novel therapies for depression. [ 84 ] [ 85 ]
Model organisms are drawn from all three domains of life, as well as viruses . The most widely studied prokaryotic model organism is Escherichia coli ( E. coli ), which has been intensively investigated for over 60 years. It is a common, gram-negative gut bacterium which can be grown and cultured easily and inexpensively in a laboratory setting. It is the most widely used organism in molecular genetics , and is an important species in the fields of biotechnology and microbiology , where it has served as the host organism for the majority of work with recombinant DNA . [ 86 ]
Simple model eukaryotes include baker's yeast ( Saccharomyces cerevisiae ) and fission yeast ( Schizosaccharomyces pombe ), both of which share many characters with higher cells, including those of humans. For instance, many cell division genes that are critical for the development of cancer have been discovered in yeast. Chlamydomonas reinhardtii , a unicellular green alga with well-studied genetics, is used to study photosynthesis and motility . C. reinhardtii has many known and mapped mutants and expressed sequence tags, and there are advanced methods for genetic transformation and selection of genes. [ 87 ] Dictyostelium discoideum is used in molecular biology and genetics , and is studied as an example of cell communication , differentiation , and programmed cell death .
Among invertebrates, the fruit fly Drosophila melanogaster is famous as the subject of genetics experiments by Thomas Hunt Morgan and others. They are easily raised in the lab, with rapid generations, high fecundity , few chromosomes , and easily induced observable mutations. [ 88 ] The nematode Caenorhabditis elegans is used for understanding the genetic control of development and physiology. It was first proposed as a model for neuronal development by Sydney Brenner in 1963, and has been extensively used in many different contexts since then. [ 89 ] [ 90 ] C. elegans was the first multicellular organism whose genome was completely sequenced, and as of 2012, the only organism to have its connectome (neuronal "wiring diagram") completed. [ 91 ] [ 92 ]
Arabidopsis thaliana is currently the most popular model plant. Its small stature and short generation time facilitate rapid genetic studies, [ 93 ] and many phenotypic and biochemical mutants have been mapped. [ 93 ] A. thaliana was the first plant to have its genome sequenced . [ 93 ]
Among vertebrates , guinea pigs ( Cavia porcellus ) were used by Robert Koch and other early bacteriologists as a host for bacterial infections, becoming a byword for "laboratory animal", but are less commonly used today. The classic model vertebrate is currently the mouse ( Mus musculus ). Many inbred strains exist, as well as lines selected for particular traits, often of medical interest, e.g. body size, obesity, muscularity, and voluntary wheel-running behavior. [ 94 ] The rat ( Rattus norvegicus ) is particularly useful as a toxicology model, and as a neurological model and source of primary cell cultures, owing to the larger size of organs and suborganellar structures relative to the mouse, while eggs and embryos from Xenopus tropicalis and Xenopus laevis (African clawed frog) are used in developmental biology, cell biology, toxicology, and neuroscience. [ 95 ] [ 96 ] Likewise, the zebrafish ( Danio rerio ) has a nearly transparent body during early development, which provides unique visual access to the animal's internal anatomy during this time period. Zebrafish are used to study development, toxicology and toxicopathology, [ 97 ] specific gene function and roles of signaling pathways.
Other important model organisms and some of their uses include: T4 phage (viral infection), Tetrahymena thermophila (intracellular processes), maize ( transposons ), hydras ( regeneration and morphogenesis ), [ 98 ] cats (neurophysiology), chickens (development), dogs (respiratory and cardiovascular systems), Nothobranchius furzeri (aging), [ 99 ] non-human primates such as the rhesus macaque and chimpanzee ( hepatitis , HIV , Parkinson's disease , cognition , and vaccines ), and ferrets ( SARS-CoV-2 ) [ 100 ]
The organisms below have become model organisms because they facilitate the study of certain characters or because of their genetic accessibility. For example, E. coli was one of the first organisms for which genetic techniques such as transformation or genetic manipulation were developed. [ citation needed ]
The genomes of all model species have been sequenced , including their mitochondrial / chloroplast genomes. Model organism databases exist to provide researchers with a portal from which to download sequences (DNA, RNA, or protein) or to access functional information on specific genes, for example the sub-cellular localization of the gene product or its physiological role. [ citation needed ]
Many animal models serving as test subjects in biomedical research, such as rats and mice, may be selectively sedentary , obese and glucose intolerant . This may confound their use to model human metabolic processes and diseases as these can be affected by dietary energy intake and exercise . [ 118 ] Similarly, there are differences between the immune systems of model organisms and humans that lead to significantly altered responses to stimuli, [ 119 ] [ 120 ] [ 121 ] although the underlying principles of genome function may be the same. [ 121 ] The impoverished environments inside standard laboratory cages deny research animals the mental and physical challenges that are necessary for healthy emotional development. [ 122 ] Without day-to-day variety, risks and rewards, and complex environments, some have argued that animal models are irrelevant models of human experience. [ 123 ]
Mice differ from humans in several immune properties: mice are more resistant to some toxins than humans; have a lower total neutrophil fraction in the blood , a lower neutrophil enzymatic capacity, lower activity of the complement system , and a different set of pentraxins involved in the inflammatory process ; and lack genes for important components of the immune system, such as IL-8 , IL-37 , TLR10 , ICAM-3 , etc. [ 76 ] Laboratory mice reared in specific-pathogen-free (SPF) conditions usually have a rather immature immune system with a deficit of memory T cells . These mice may have limited diversity of the microbiota , which directly affects the immune system and the development of pathological conditions. Moreover, persistent virus infections (for example, herpesviruses ) are activated in humans, but not in SPF mice, with septic complications and may change the resistance to bacterial coinfections . "Dirty" mice are possibly better suited for mimicking human pathologies. In addition, inbred mouse strains are used in the overwhelming majority of studies, while the human population is heterogeneous, pointing to the importance of studies in interstrain hybrid, outbred , and nonlinear mice. [ 76 ]
Some studies suggest that inadequate published data in animal testing may result in irreproducible research, with details about how experiments are done omitted from published papers, or differences in testing that may introduce bias. Examples of hidden bias include a 2014 study from McGill University in Montreal, Canada which suggests that mice handled by men rather than women showed higher stress levels. [ 124 ] [ 125 ] [ 126 ] Another study in 2016 suggested that gut microbiomes in mice may have an impact upon scientific research. [ 127 ]
Ethical concerns, as well as the cost, maintenance and relative inefficiency of animal research, have encouraged the development of alternative methods for the study of disease. Cell culture, or in vitro studies, provides an alternative that preserves the physiology of the living cell but does not require the sacrifice of an animal for mechanistic studies. Human induced pluripotent stem cells can [ citation needed ] also elucidate new mechanisms for understanding cancer and cell regeneration. Imaging studies (such as MRI or PET scans) enable non-invasive study of human subjects. Recent advances in genetics and genomics can identify disease-associated genes, which can be targeted for therapies.
Many biomedical researchers argue that there is no substitute for a living organism when studying complex interactions in disease pathology or treatments. [ 128 ] [ 129 ]
Debate about the ethical use of animals in research dates at least as far back as 1822, when the British Parliament, under pressure from British and Indian intellectuals, enacted the first law for animal protection preventing cruelty to cattle. [ 130 ] This was followed by the Cruelty to Animals Act 1835 and the Cruelty to Animals Act 1849 , which criminalized ill-treating, over-driving, and torturing animals. In 1876, under pressure from the National Anti-Vivisection Society , the Cruelty to Animals Act 1849 was amended to include regulations governing the use of animals in research. This new act stipulated that 1) experiments must be proven absolutely necessary for instruction, or to save or prolong human life; 2) animals must be properly anesthetized; and 3) animals must be killed as soon as the experiment is over. Today, these three principles are central to the laws and guidelines governing the use of animals in research. In the U.S., the Animal Welfare Act of 1970 (see also Laboratory Animal Welfare Act ) set standards for animal use and care in research. This law is enforced by APHIS's Animal Care program. [ 131 ]
In academic settings in which NIH funding is used for animal research, institutions are governed by the NIH Office of Laboratory Animal Welfare (OLAW). At each site, OLAW guidelines and standards are upheld by a local review board called the Institutional Animal Care and Use Committee (IACUC). All laboratory experiments involving living animals are reviewed and approved by this committee. In addition to proving the potential for benefit to human health, minimization of pain and distress, and timely and humane euthanasia, experimenters must justify their protocols based on the principles of Replacement, Reduction and Refinement. [ 132 ]
"Replacement" refers to efforts to engage alternatives to animal use. This includes the use of computer models, non-living tissues and cells, and replacement of "higher-order" animals (primates and mammals) with "lower" order animals (e.g. cold-blooded animals, invertebrates) wherever possible. [ 133 ]
"Reduction" refers to efforts to minimize number of animals used during the course of an experiment, as well as prevention of unnecessary replication of previous experiments. To satisfy this requirement, mathematical calculations of statistical power are employed to determine the minimum number of animals that can be used to get a statistically significant experimental result. [ citation needed ]
"Refinement" refers to efforts to make experimental design as painless and efficient as possible in order to minimize the suffering of each animal subject. [ citation needed ] | https://en.wikipedia.org/wiki/Model_organism |
Model robots are model figures with origins in the Japanese anime genre of mecha . The majority of model robots are produced by Bandai and are based on the Gundam anime metaseries . This has given rise to the hobby's common name in Japan, Gunpla (or gan-pura , a Japanese portmanteau of "Gundam" and "plastic model"). Though there are exceptions, the model robot genre is dominated by anime tie-ins, with anime series and movies frequently serving as merchandising platforms.
Gundam kits are the most common and popular variety of mecha models exemplifying the general characteristics of models in the genre. Gundam kits are typically oriented toward beginners, and most often feature simple assembly, simple designs, and rugged construction—less durable than a pre-assembled toy, but more durable than a true scale model. The result is that the majority of Gundam kits feature hands and other parts that favor poseability or easy assembly over accurate shape. They may also exhibit various draft-angle problems, and features like antennae that are oversized to prevent breakage. [ 1 ] For the most part, other kit lines and other kit manufacturers in the genre follow suit, though there are exceptions.
Because the subjects of model robot kits are typically humanoid and/or possess limbs, joints are required in order to make the finished model poseable. For decades, poly-caps were and still are used for this purpose, although they tend to degrade over time and thus have been less frequently used since the 2010s. Hard plastic joints generally exhibit greater friction than polyvinyl joints, and are similarly more durable than polystyrene joints. ABS joints, however, require greater precision in tooling [ 2 ] [ 3 ] to ensure easy assembly, and in some cases, they require screws and a small gap between parts.
One distinctive feature of model robot kits since the 1990s, as opposed to most other plastic model kits, is that they are molded in color: each part generally is made of a colored plastic corresponding to its intended color on the finished model. Bandai in particular has become well-known for its use of multi-color molding, which allows parts of different colors to be molded on the same sprue. [ 4 ] In some cases, compromises have to be made - for instance, molding a part in an incorrect color, or a two-colored part in only one color - to ensure that the model is structurally stable and not overly complex, particularly when the intended retail price is low. One criterion by which enthusiasts assess the quality of a kit is its color-accuracy - that is to say, the correspondence between the molded color of the parts and the intended color of the finished model.
Anime mecha subjects such as Gundam are most often portrayed as being between 15 and 20 meters tall, so the kits are scaled in a manner that brings the subject to an economical and manageable size. For machines in this size range, scales of 1:100 and 1:144 are most common, with 1:60 being reserved for larger (and usually more expensive or elaborate) kits. For smaller subjects, scales such as 1:20, 1:35, and 1:72 are also common. Bandai kits will commonly use a fairly extensive redesign, rather than the original design itself. Some of this inconsistency in representation may be due to the inherent difficulties in turning a 2-D cel-animated design into a 3-D design. Additionally, newer versions of the same model could be very different from an older version, due to better manufacturing technologies.
Gunpla kits are also sorted by a grading scale, [ 5 ] which signals the complexity, and sometimes the art style, of the model.
Gunpla is a major hobby in Japan, with entire magazines dedicated to variations on Bandai models. As mecha are fictional humanoid objects, there is considerable leeway for custom models and " kitbashes. " A large amount of artistry goes into action poses and personalized variations on classic machines. There is also a market for custom resin kits which fill in gaps in the Bandai model line.
Gundam is not the only line of model robots. Eureka Seven , Neon Genesis Evangelion , Patlabor , Aura Battler Dunbine and Heavy Metal L-Gaim , to name a few, are all represented by Bandai model lines. Other manufacturers, such as Hasegawa , Wave , and Kotobukiya , have in recent years offered products from other series, such as Macross , Votoms , Five Star Stories , Armored Core , Virtual-On , Zoids , and Maschinen Krieger , with sales rivaling Bandai's most popular products. | https://en.wikipedia.org/wiki/Model_robot |
A model steam engine is a small steam engine not built for serious use. Often they are built as an educational toy for children, in which case it is also called a toy steam engine, or for live steam enthusiasts. Between the 18th and early 20th centuries, demonstration models were also in use at universities and engineering schools, frequently designed and built by students as part of their curriculum. [ 1 ]
Model steam engines have been made in many forms by a number of manufacturers, but building model steam engines from scratch is popular among adult steam enthusiasts, although this generally requires access to a lathe and/or milling machine . [ 2 ] Those without a lathe can alternatively purchase prefabricated parts. [ 3 ]
In the late 19th century, manufacturers such as German toy company Bing introduced the two main types of model/toy steam engines, namely stationary engines with accessories that were supposed to mimic a 19th-century factory, [ 4 ] and mobile engines such as steam locomotives and boats . Later, especially in the early 20th century, steam rollers , fire engines , traction engines and steam wagons began to appear. At the peak of their popularity, around the mid 20th century, there were hundreds of companies making steam toys and models. Today, companies such as Wilesco (Germany), Mamod (UK), and Jensen (US) continue to produce model/toy steam engines.
Toy steam engines will commonly have fewer features (such as mechanical lubricators or governors ), and operate at lower pressures, while model steam engines will place more emphasis on similarity to life-sized engines. Manufacturers such as Wilesco sell both simple toy engines for beginners (e.g. the D3) and more intricate model engines that are meant to be used to drive things like workshops or boats. [ 5 ]
Model steam engines typically use hexamine fuel tablets , methylated spirits (aka meths or denatured alcohol), butane gas, or electricity to heat the boiler. Cylinders are either oscillating ( single-acting or double-acting ) or fixed cylinder using slide-valves , piston valves or poppet valves (normally double-acting). [ 6 ] Spring safety valves and steam whistles are other common features of model steam engines. Some stationary engines also have feedwater pumps to replenish boiler water, allowing them to run indefinitely as long as sufficient fuel is available. | https://en.wikipedia.org/wiki/Model_steam_engine |
Model synthesis (also wave function collapse or 'wfc' ) is a family of constraint-solving algorithms commonly used in procedural generation , especially in the video game industry .
Some video games known to have utilized variants of the algorithm include Bad North , Townscaper , and Caves of Qud .
The first example of this type of algorithm was described by Paul Merrell, who first termed it 'model synthesis' in his 2007 i3D paper, [ 1 ] and presented it at the 2008 SIGGRAPH conference and in his 2009 PhD thesis. [ 2 ] The name 'wave function collapse' later became the popular name for a variant of that algorithm, after an implementation by Maxim Gumin was published in 2016 on a GitHub repository with that name. [ 3 ] Gumin's implementation significantly popularised this style of algorithm, with it becoming widely adopted and adapted by technical artists and game developers over the following years. [ 3 ]
There were a number of inspirations to Gumin's implementation, including Merrell's PhD dissertation, and convolutional neural network style transfer. [ 4 ] [ 5 ] The popular name for the algorithm, 'wave function collapse', is from an analogy drawn between the algorithm's method and the concept of superposition and observation in quantum mechanics . [ 6 ] [ 7 ] Some innovations present in Gumin's implementation included the usage of overlapping patterns, allowing a single image to be used as an input to the algorithm. [ 8 ]
Some have speculated that the reason Gumin's implementation proved more popular than Merrell's, may have been due to the 'model synthesis' implementation's lower accessibility, its 3D focus, or perhaps the general public's computing constraints at the time. [ 9 ]
One of the differences between Merrell's 'model synthesis' and Gumin's 'wave function collapse' lies in the decision of which cell to 'collapse' next. Merrell's implementation uses a scanline approach, whereas Gumin's always selects as the next cell the one with the lowest number of possible outcomes. [ 10 ]
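The following is a minimal Python sketch of the shared observe-and-propagate core, not either published implementation; the `neighbours` and `compatible` helpers are hypothetical stand-ins for the caller's grid topology and adjacency rules. Replacing `lowest_entropy_cell` with row-major iteration over undecided cells would give Merrell's scanline behaviour instead.

```python
import random

# `grid` maps positions to sets of candidate tiles; `neighbours(p)` yields
# (neighbour position, direction) pairs; `compatible(a, b, d)` says tile b may
# sit next to tile a in direction d. Both helpers are assumed, not defined here.

def lowest_entropy_cell(grid):
    # Gumin-style selection: the undecided cell with the fewest remaining options
    open_cells = [(len(opts), pos) for pos, opts in grid.items() if len(opts) > 1]
    return min(open_cells)[1] if open_cells else None

def collapse(grid, neighbours, compatible):
    while (pos := lowest_entropy_cell(grid)) is not None:
        grid[pos] = {random.choice(tuple(grid[pos]))}    # observe/collapse
        stack = [pos]
        while stack:                                     # propagate constraints
            p = stack.pop()
            for q, d in neighbours(p):
                allowed = {t for t in grid[q]
                           if any(compatible(s, t, d) for s in grid[p])}
                if allowed != grid[q]:
                    if not allowed:
                        raise RuntimeError("contradiction: restart or backtrack")
                    grid[q] = allowed
                    stack.append(q)
    return grid
```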
The WFC or 'model synthesis' algorithm has some variants. [ 6 ] Gumin and Merrell's implementations are described below, and other variants are noted:
Merrell's earlier implementation is substantially the same as Gumin's with some minor differences.
(1) In Merrell's version, there is no requirement to select the cell with the lowest number of possible output states for collapse. Instead, a scanline approach is adopted. According to Merrell, this results in a lower failure rate of the model without any negative effect on quality. [ 10 ] Some commentators have noted however that the scanline approach to 'collapse' tends to result in directional artifacts. [ 11 ]
(2) Merrell's approach performs the algorithm in chunks, rather than all-at-once. This approach greatly reduces the failure rate for many large complex models; especially in a 3D space. [ 10 ]
In April 2023 Shaad Alaka and Rafael Bidarra of Delft University proposed 'Hierarchical Semantic wave function collapse'. Essentially, the algorithm is modified to work beyond simple, unstructured sets of tiles. Prior to their work, all WFC algorithm variants operated on a flat set of tile choices per cell. [ 12 ]
Their generalised approach organizes tile-sets into a hierarchy, consisting of abstract nodes called 'meta-tiles', and terminating nodes called 'leaf tiles'. [ 13 ] For example, on the first pass, WFC might make a certain tile a meta-tile of 'castle' type; which on a second pass will be collapsed into other tiles based on a rule, e.g. a 'wall' or 'grass' tile.
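Reusing the `collapse` sketch above, a hypothetical two-level hierarchy might be handled as follows; the tile names, `HIERARCHY` table, and the two compatibility predicates are illustrative assumptions, not the authors' published interface.

```python
# Hypothetical two-level hierarchy for the hierarchical semantic variant:
# meta-tiles are collapsed first, then each cell's leaf options are restricted
# to the children of its assigned meta-tile before a second collapse pass.
HIERARCHY = {"castle": {"wall", "tower"}, "nature": {"grass", "tree"}}

def hierarchical_collapse(grid, neighbours, compat_meta, compat_leaf):
    meta = {p: set(HIERARCHY) for p in grid}           # pass 1: meta level
    collapse(meta, neighbours, compat_meta)
    for p in grid:                                     # restrict leaf options
        (m,) = meta[p]                                 # the collapsed meta-tile
        grid[p] = grid[p] & HIERARCHY[m]
    return collapse(grid, neighbours, compat_leaf)     # pass 2: leaf level
```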
| https://en.wikipedia.org/wiki/Model_synthesis
A model transformation , in model-driven engineering , is an automated way of modifying and creating platform-specific models from platform-independent ones. An example use of model transformation is ensuring that a family of models is consistent, in a precise sense which the software engineer can define. The aim of using a model transformation is to save effort and reduce errors by automating the building and modification of models where possible.
Model transformations can be thought of as programs that take models as input. There is a wide variety of kinds of model transformation and uses of them, which differ in their inputs and outputs and also in the way they are expressed.
A model transformation usually specifies which models are acceptable as input, and if appropriate what models it may produce as output, by specifying the metamodel to which a model must conform.
Model transformations and languages for them have been classified in many ways. [ 1 ] [ 2 ] [ 3 ] Some of the more common distinctions drawn are:
In principle a model transformation may have many inputs and outputs of various types; the only absolute limitation is that a model transformation will take at least one model as input. However, a model transformation that did not produce any model as output would more commonly be called a model analysis or model query.
Endogenous transformations are transformations between models expressed in the same language. Exogenous transformations are transformations between models expressed using different languages. [ 4 ] For example, in a process conforming to the OMG Model Driven Architecture , a platform-independent model might be transformed into a platform-specific model by an exogenous model transformation.
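As a toy illustration of an exogenous, unidirectional transformation, the Python sketch below maps a platform-independent entity model to a relational (platform-specific) model; the dictionary metamodels and the type mapping are invented for illustration and do not correspond to any standard.

```python
# A minimal sketch of an exogenous model transformation: a platform-independent
# "entity" model is mapped to a platform-specific SQL table model.

PIM = {  # hypothetical platform-independent model: entity with typed attributes
    "entity": "Customer",
    "attributes": [("name", "String"), ("age", "Integer")],
}

TYPE_MAP = {"String": "VARCHAR(255)", "Integer": "INT"}  # PIM type -> SQL type

def to_psm(pim):
    """Transform the PIM into a platform-specific relational model."""
    return {
        "table": pim["entity"].lower() + "s",
        "columns": [(name, TYPE_MAP[t]) for name, t in pim["attributes"]],
    }

psm = to_psm(PIM)
cols = ", ".join(f"{n} {t}" for n, t in psm["columns"])
print(f"CREATE TABLE {psm['table']} ({cols});")
```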
A unidirectional model transformation has only one mode of execution: that is, it always takes the same type of input and produces the same type of output. Unidirectional model transformations are useful in compilation-like situations, where any output model is read-only. The relevant notion of consistency is then very simple: the input model is consistent only with the model that the transformation would produce as output.
For a bidirectional model transformation, the same type of model can sometimes be input and other times be output. Bidirectional transformations are necessary in situations where people are working on more than one model and the models must be kept consistent. Then a change to either model might necessitate a change to the other, in order to maintain consistency between the models. Because each model can incorporate information which is not reflected in the other, there may be many models which are consistent with a given model.
It is particularly important that a bidirectional model transformation has appropriate properties to make it behave sensibly: for example, not making changes unnecessarily, or discarding deliberately made changes. [ 5 ]
A model transformation may be written in a general purpose programming language, but specialised model transformation languages are also available. Bidirectional transformations, in particular, are best written in a language that ensures the directions are appropriately related. The OMG -standardised model transformation languages are collectively known as QVT .
In some model transformation languages, for example the QVT languages, a model transformation is itself a model, that is, it conforms to a metamodel which is part of the model transformation language's definition. This facilitates the definition of Higher-Order Transformations (HOTs), [ 6 ] i.e. transformations which have other transformations as input and/or output. | https://en.wikipedia.org/wiki/Model_transformation
A model transformation language in systems and software engineering is a language intended specifically for model transformation .
The notion of model transformation is central to model-driven development . A model transformation, which is essentially a program which operates on models, can be written in a general-purpose programming language, such as Java . However, special-purpose model transformation languages can offer advantages, such as syntax that makes it easy to refer to model elements. For writing bidirectional model transformations, which maintain consistency between two or more models, a specialist bidirectional model transformation language is particularly important, because it can help avoid the duplication that would result from writing each direction of the transformation separately.
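The duplication problem can be seen even in a toy Python sketch like the one below, where each direction of a consistency-maintaining transformation is written and kept in sync by hand; the model shapes are invented for illustration, and a bidirectional language would instead derive both directions from a single relation.

```python
# Toy pair of unidirectional transformations written by hand in a GPL.
# The "models" here are invented dictionaries; note that the mapping logic
# is duplicated across the two directions and must be maintained manually.

def forward(source):
    # source model: {"temp_c": ...} -> target model: {"temp_f": ...}
    return {"temp_f": source["temp_c"] * 9 / 5 + 32}

def backward(target):
    # target model -> source model; duplicates (and must invert) the logic above
    return {"temp_c": (target["temp_f"] - 32) * 5 / 9}

# Round-trip check that the two hand-written directions stay consistent:
assert backward(forward({"temp_c": 100.0})) == {"temp_c": 100.0}
```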
Currently, most model transformation languages are being developed in academia. The OMG has standardised a family of model transformation languages called QVT , but the field is still immature. [ 1 ]
There are ongoing debates regarding the benefits of specialised model transformation languages, compared to the use of general-purpose programming languages (GPLs) such as Java . [ 2 ] While GPLs have advantages in terms of more widely-available practitioner knowledge and tool support, the specialised transformation languages do provide more declarative facilities and more powerful specialised features to support model transformations. [ 3 ] | https://en.wikipedia.org/wiki/Model_transformation_language |
Aspen Plus, Aspen HYSYS, ChemCAD, MATLAB, and PRO/II are commonly used process simulators for modeling , simulation and optimization of a distillation process in the chemical industries. [ 1 ] [ 2 ] Distillation is the technique of preferential separation of the more volatile components from the less volatile ones in a feed, followed by condensation . The vapor produced is richer in the more volatile components. The distribution of the components in the two phases is governed by the vapour-liquid equilibrium relationship. In practice, distillation may be carried out by either of two principal methods. The first method is based on the production of vapor by boiling the liquid mixture to be separated and condensing the vapors without allowing any liquid to return to the still; there is no reflux . The second method is based on the return of part of the condensate to the still under such conditions that this returning liquid is brought into intimate contact with the vapors on their way to the condenser. [ 3 ] [ 4 ]
Chemical process modeling is a technique used in chemical engineering process design . Process modeling is defined as the physical , mathematical or logical representation of a real process, system or phenomenon using the model library present in the process simulator software. In this technique, using process simulator software, we define a system of interconnected components. A system is defined as a group of objects that are joined together in some regular order or interdependence toward the accomplishment of some purpose. The system is then solved so that its steady-state or dynamic behavior can be predicted. The components of the system and their connections are represented as a process flow diagram . [ 5 ] A flow diagram for the ammonia process (Finlayson, 2006) is shown in figure 1 below using Aspen Plus software.
The most important result of developing a mathematical model of a chemical engineering system is the understanding gained of what really makes the process tick. Mathematical models can be useful in all phases of chemical engineering, from research and development to plant operations, and even in business and economics studies. The basis for mathematical models are the fundamental physical and chemical laws, such as the laws of conservation of mass , energy and momentum , and degrees of freedom . Mathematical modeling is very much an art. [ 6 ] It takes experience, practice and brain power to be a good mathematical modeler.
A simulation is the representation of a real-world process or system over a period of time. Simulation can be done by hand or on a computer ; it involves the generation of an artificial history of the system and the observation of that artificial history to draw inferences concerning the operating characteristics of the real system. Thus, simulation modelling can be used both as an analysis tool for predicting the effect of changes to an existing system and as a design tool to predict the performance of a new system under varying sets of circumstances. [ 7 ] Process simulation describes a process flow diagram where various unit operations are present and connected by product streams.
It is extensively used both in the educational arena and in industry to predict the behavior of a process using material balance equations, equilibrium relationships, reaction kinetics, etc. [ 6 ]
In batch distillation , the feed is charged to the still pot, to which heat is supplied continuously through a steam jacket or a steam coil. As the mixture boils, it generates a vapour richer in the more volatile component. But as boiling continues, the concentration of the more volatile component in the liquid decreases. It is generally assumed that equilibrium vaporization occurs in the still. The vapour is led to a condenser and the condensate, or top product, is collected in the receiver. At the beginning, the condensate will be quite rich in the more volatile component, but its concentration decreases as the condensate keeps accumulating in the receiver. The condensate is usually withdrawn intermittently, giving products or 'cuts' of different concentrations. Batch distillation is used when the feed rate is not large enough to justify installation of a continuous distillation unit. It may also be used when the constituents greatly differ in volatility. [ 8 ] [ 9 ] Figure 1 shows the batch distillation setup.
Let L be the moles of material in the still and x the mole fraction of the more volatile component A in the liquid, and let D be the moles of accumulated condensate; the concentration of the equilibrium vapour is $y^{*}$. Over a small time interval, the change in the amount of liquid in the still is $dL$ and the amount of vapour withdrawn is $dD$. The following differential mass balance equations may be written:

Total material balance: $-dL = dD$ ----- (i)

Component A balance: $-d(Lx) = y^{*}\,dD$ ----- (ii)

Expanding (ii): $-L\,dx = y^{*}\,dD + x\,dL = y^{*}\,dD - x\,dD = (y^{*}-x)\,dD$ ----- (iii)
Equation (i) means that the total amount of vapour generated must be equal to the decrease in the total amount of liquid. Similarly, equation (ii) means that the loss in the number of moles of A from the still because of vaporization is the same as the amount of A in the small amount of vapour generated.
Putting $dD = -dL$ in Equation (iii) and rearranging,

$\dfrac{dL}{L} = \dfrac{dx}{y^{*}-x}$ ------ (iv)
If distillation starts with F moles of feed of concentration $x_F$ and continues till the amount of liquid reduces to W moles (composition $x_w$), the above equation can be integrated to give

$\int_{F}^{W}\dfrac{dL}{L} = \int_{x_F}^{x_w}\dfrac{dx}{y^{*}-x} \;\Longrightarrow\; \ln\dfrac{F}{W} = \int_{x_w}^{x_F}\dfrac{dx}{y^{*}-x}$ ------ (v)

Equation (v) is the basic equation of batch distillation and is called the Rayleigh equation. [ 10 ] The Rayleigh equation is used for calculating data for a batch distillation column.
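As a numerical illustration, the Python sketch below evaluates the Rayleigh integral assuming a constant relative volatility $\alpha$, so that $y^{*} = \alpha x/(1+(\alpha-1)x)$; the value of $\alpha$, the feed quantity and the compositions are illustrative assumptions, not a worked industrial case.

```python
import math
from scipy.integrate import quad

ALPHA = 2.4  # assumed constant relative volatility (illustrative value)

def y_star(x):
    # Equilibrium vapour mole fraction y* for constant relative volatility
    return ALPHA * x / (1.0 + (ALPHA - 1.0) * x)

def rayleigh_integral(x_F, x_w):
    # Right-hand side of equation (v): integral of dx / (y* - x) from x_w to x_F
    value, _ = quad(lambda x: 1.0 / (y_star(x) - x), x_w, x_F)
    return value

F, x_F, x_w = 100.0, 0.5, 0.2  # feed moles, feed and final still compositions
W = F * math.exp(-rayleigh_integral(x_F, x_w))  # ln(F/W) = I  =>  W = F*exp(-I)
D = F - W
x_D = (F * x_F - W * x_w) / D  # average distillate composition by overall balance
print(f"W = {W:.1f} mol, D = {D:.1f} mol, average x_D = {x_D:.3f}")
```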
During the 1970s, researchers at the Massachusetts Institute of Technology (MIT) developed a novel technology with United States Department of Energy funding. The undertaking, known as the Advanced System for Process Engineering (ASPEN) Project, was originally intended to design nonlinear simulation software that could aid in the development of synthetic fuels . In 1981, AspenTech was founded to commercialize the simulation software package. AspenTech went public in October 1994 and has acquired 19 industry-leading companies as part of its mission to offer complete, integrated solutions to the process industries.
As the complexity of a plant integrating several process units increases, solving the resulting large equation set becomes a challenge. In such situations, process flowsheet simulators are typically used.
The Aspen software tool can simulate large processes with a high degree of accuracy. It has a model library that includes mixers, splitters, phase separators, heat exchangers, distillation columns, reactors, pressure changers, manipulators, and more. By interconnecting several unit operations, a process flow diagram (PFD) for a complete plant can be developed. Built-in Fortran code within the Aspen simulator solves the model structure of a single unit or of a complete chemical plant.
The Aspen simulator has been developed for the simulation of a wide variety of processes, such as chemical and petrochemical, petroleum refining, polymer, and coal-based processes. [ 11 ]
Nowadays, different Aspen packages are available for simulations with promising performance. Briefly, some of them are presented below.
Aspen Plus – This process simulator is used for steady-state simulation in the chemical, petrochemical and petroleum industries. It is also used for performance monitoring, design, optimization and business planning.
Aspen Dynamics – This process simulator is used for dynamic studies and closed-loop control of several process industries. Aspen Dynamics is integrated with Aspen Plus.
Aspen Batch CAD – This simulator is typically used for batch processing , reaction and distillations. It allows deriving reaction and kinetic information from experimental data to create a process simulation.
Aspen Chromatography – This is a dynamic simulation software package used for both batch chromatography and simulated-moving-bed chromatography processes.
Aspen Properties – It is used for calculating thermophysical properties.
Aspen Polymer Plus – It is a modeling tool for steady state and dynamic simulation and optimization of polymer processes. This is available within Aspen Plus or Aspen Properties rather than via an external menu.
Aspen HYSYS – This process modeling package is typically used for steady-state simulation, performance monitoring, design, optimization and business planning for petroleum refining , and oil and gas industries. [ 2 ]
Aspen simulates the performance of the designed process. A solid understanding of the underlying chemical engineering principles is needed to supply reasonable values for the input parameters and to analyse the results obtained. In addition to the process flow diagram , the required input information to simulate a process comprises setup, component properties, streams and blocks. [ 12 ]
BatchFrac is a rigorous model in the software's model library used for simulating batch distillation columns. It can also include reactions occurring on any stage of the separator. The BatchFrac model does not consider column hydraulics and assumes negligible vapour holdup and constant liquid holdup. Modeling and simulation of a batch distillation unit can be carried out in Aspen Plus, one of the most widely used process simulators in the chemical industry, with the data given below, and the simulation results then checked.
The simulation of a batch distillation column in Aspen Plus uses the following feed specification:
Temperature = 373 K, Pressure = 1 bar
Flow basis = Volume (50 L/hr)
Composition type = mole fraction
Ethanol - 0.5, Water – 0.5 | https://en.wikipedia.org/wiki/Modeling_and_simulation_of_batch_distillation_unit |
Polymer crystals have different properties than simple atomic crystals. They possess high density and long-range order . They are not isotropic but anisotropic in nature, and they have a limited conformation space. Just as atomic crystals have lattices , polymer crystals also exhibit a periodic structure called a lattice, which describes the repetition of the unit cells in space. The simulation of polymer crystals is complex and draws not from one field alone but from both solid -state and fluid -state physics. Polymer crystals have unit cells that consist of tens of atoms, while the molecules themselves comprise 10^4 to 10^6 atoms.
There are two methods for the study of polymer crystals: 1) optimization methods and 2) sampling methods. Optimization methods have some advantages over sampling methods, such as localizing the crystal in phase space; sampling methods generally cannot localize the crystals and thus do not require localization assumptions. Optimization methods include molecular mechanics and lattice dynamics , while sampling methods include the Monte Carlo method and molecular dynamics . A brief discussion of these methods follows:
There is a variety of methods for studying polymer crystals by molecular simulation. It is especially important in polymer crystals to be cognizant of the limitations imposed by either the assumptions on which a method is based or the robustness of the simulation method. [ 1 ] | https://en.wikipedia.org/wiki/Modeling_of_polymer_crystals |
A modeling perspective in information systems is a particular way to represent pre-selected aspects of a system. Any perspective has a different focus, conceptualization, dedication and visualization of what the model is representing.
The traditional way to distinguish between modeling perspectives is into structural, functional and behavioral/processual perspectives. These, together with the rule, object, communication, and actor-and-role perspectives, form one way of classifying modeling approaches. [ 1 ]
This approach concentrates on describing the static structure. The main concept in this modeling perspective is the entity, which could be an object, phenomenon, concept, thing, etc.
The data modeling languages have traditionally handled this perspective, examples of such being:
Looking at the ER-language we have the basic components:
Looking at the generic semantic modeling language we have the basic components:
The functional modeling approach concentrates on describing the dynamic process. The main concept in this modeling perspective is the process, which could be a function, transformation, activity, action, task, etc. A well-known example of a modeling language employing this perspective is data flow diagrams.
The perspective uses four symbols to describe a process, these being: the process, the data flow, the data store, and the external entity (a source or sink outside the system).
With these symbols, a process can be represented as a network of such symbols.
This decomposed process is a DFD, data flow diagram.
Behavioral perspective gives a description of system dynamics. The main concepts in behavioral perspective are states and transitions between states. State transitions are triggered by events. State Transition Diagrams (STD/STM), State charts and Petri-nets are some examples of well-known behaviorally oriented modeling languages. Different types of State Transition Diagrams are used particularly within real-time systems and telecommunications systems.
Rule perspective gives a description of goals/means connections. The main concepts in rule perspective are rule, goal and constraint. A rule is something that influences the actions of a set of actors. The standard form of rule is “IF condition THEN action/expression”. Rule hierarchies (goal-oriented modeling), Tempora and Expert systems are some examples of rule oriented modeling.
The object-oriented perspective describes the world as autonomous, communicating objects. An object is an “entity” which has a unique and unchangeable identifier and a local state consisting of a collection of attributes with assignable values. The state can only be manipulated with a set of methods defined on the object. The value of the state can only be accessed by sending a message to the object to call on one of its methods. An event is when an operation is being triggered by receiving a message, and the trace of the events during the existence of the object is called the object’s life cycle or the process of an object. Several objects that share the same definitions of attributes and operations can be parts of an object class. The perspective is originally based on design and programming of object oriented systems. Unified Modelling Language (UML) is a well known language for modeling with an object perspective.
This perspective is based on language/action theory from philosophical linguistics . The basic assumption in this perspective is that persons/objects cooperate on a process/action through communication between them.
An illocutionary act consists of five elements: speaker, hearer, time, location and circumstances. There is a reason and a goal for the communication, and the participants in a communication act are oriented towards mutual agreement. In a communication act, the speaker can generally raise three claims: a claim to truth (referring to an object), a claim to justice (referring to the social world of the participants) and a claim to sincerity (referring to the subjective world of the speaker).
Actor and role perspective is a description of organisational and system structure. An actor can be defined as a phenomenon that influences the history of another actor, whereas a role can be defined as the behaviour which is expected by an actor, amongst other actors, when filling the role. Modeling within these perspectives is based both on work with object-oriented programming languages and work with intelligent agents in artificial intelligence . I* is an example of an actor oriented language. | https://en.wikipedia.org/wiki/Modeling_perspective |
Distillation is a process in which components with different vapour pressures are separated. One fraction leaves overhead and is condensed to distillate, and the other is the bottom product. The bottom product is mostly liquid, while the overhead fraction can be vapour or an aerosol .
This method requires the components to have different volatilities in order to be separated.
The column consists of three sections: a stripping section, a rectification section, and a feed section.
For rectification and stripping a countercurrent liquid phase must flow through the column, so that liquid and vapour can contact each other on each stage.
The distillation column is fed with a mixture containing the mole fraction xf of the desired compound. The overhead mixture is a gas or an aerosol which contains the mole fraction xD of the desired compound and the bottom product contains a mixture with the fraction xB of the desired compound.
An overhead condenser is a heat exchanger used for condensing the mixture leaving the top of the column. Either cooling water or air is used as a cooling agent.
An overhead accumulator is a horizontal pressure vessel containing the condensed mixture.
Pumps can be used to control the reflux to the column.
A reboiler produces the vapour stream in the distillation column. It can be installed internally or externally.
The total molar hold up in the nth tray Mn is considered constant.
The imbalances in the input and output flows are accounted for in the component and heat balance equations.
Flow rate of the liquid phase entering the tray and the mole fraction of the desired compound in it are L n + 1 {\displaystyle L_{n+1}} and x n + 1 {\displaystyle x_{n+1}} .
Flow rate of the vapour phase entering the tray and the mole fraction of the desired compound in it are V n − 1 {\displaystyle V_{n-1}} and y n − 1 {\displaystyle y_{n-1}} .
Flow rate of the vapour phase leaving the tray and the mole fraction of the desired compound in it are V n {\displaystyle V_{n}} and y n {\displaystyle y_{n}} .
Flow rate of the liquid phase leaving the tray and the mole fraction of the desired compound in it are L n {\displaystyle L_{n}} and x n {\displaystyle x_{n}} .
D t ( M n x n ) = L n + 1 x n + 1 − L n x n + V n − 1 y n − 1 − V n y n {\displaystyle D_{t}(M_{n}x_{n})=L_{n+1}x_{n+1}-L_{n}x_{n}+V_{n-1}y_{n-1}-V_{n}y_{n}} , with D t := d / d t {\displaystyle D_{t}:=d/dt}
By differentiating, substituting the total material balance on the tray, and rearranging, we get: D t ( x n ) = [ L n + 1 x n + 1 + V n − 1 y n − 1 − ( L n + 1 + V n − 1 ) x n − V n ( y n − x n ) ] / M n {\displaystyle D_{t}(x_{n})=[L_{n+1}x_{n+1}+V_{n-1}y_{n-1}-(L_{n+1}+V_{n-1})x_{n}-V_{n}(y_{n}-x_{n})]/M_{n}}
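As a sketch of how this balance can be used in practice, the expression for D_t(x_n) maps directly onto a function that could be fed, one equation per tray, to an ODE integrator such as scipy.integrate.solve_ivp. The argument names below are assumptions chosen for readability, not notation from the source.

```python
def tray_composition_rate(M_n, L_in, x_in, V_in, y_in, V_out, x_n, y_n):
    """dx_n/dt for tray n with constant molar holdup M_n.

    L_in, x_in : liquid flow and composition entering from tray n+1
    V_in, y_in : vapour flow and composition entering from tray n-1
    V_out      : vapour flow leaving tray n
    x_n, y_n   : liquid and vapour compositions on tray n
    """
    return (L_in * x_in + V_in * y_in
            - (L_in + V_in) * x_n
            - V_out * (y_n - x_n)) / M_n
```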
D t ( M n h n ) = h n + 1 L n + 1 − h n L n + H n − 1 V n − 1 − H n V n {\displaystyle D_{t}(M_{n}h_{n})=h_{n+1}L_{n+1}-h_{n}L_{n}+H_{n-1}V_{n-1}-H_{n}V_{n}} , where h {\displaystyle h} is the enthalpy of the liquid and H {\displaystyle H} is the enthalpy of the vapour
By substituting the mass balance equation into the above equation, we get the following expression:
V n = [ h n + 1 L n + 1 + H n − 1 V n − 1 − ( L n + 1 + V n − 1 ) h n ] / ( H n − h n ) {\displaystyle V_{n}=[h_{n+1}L_{n+1}+H_{n-1}V_{n-1}-(L_{n+1}+V_{n-1})h_{n}]/(H_{n}-h_{n})} | https://en.wikipedia.org/wiki/Modelling_Condensate_Distillation_Coloumn |
Modelling and Simulation in Materials Science and Engineering is a peer-reviewed scientific journal published by IOP Publishing eight times per year.
The journal covers computational materials science including properties, structure, and behavior of all classes of materials at scales from the atomic to the macroscopic . This includes electronic structure/properties of materials determined by ab initio and/or semi-empirical methods, atomic level properties of materials, microstructural level phenomena, continuum-level modelling pertaining to material behaviour, and modelling behaviour in service. Mechanical, microstructural, electronic, chemical, biological, and optical properties of materials are also of interest.
The editor-in-chief is Javier Llorca (Polytechnic University of Madrid & IMDEA Materials Institute, Spain).
The journal is abstracted and indexed by: | https://en.wikipedia.org/wiki/Modelling_and_Simulation_in_Materials_Science_and_Engineering |
Modelling biological systems is a significant task of systems biology and mathematical biology . [ a ] Computational systems biology [ b ] [ 1 ] aims to develop and use efficient algorithms , data structures , visualization and communication tools with the goal of computer modelling of biological systems. It involves the use of computer simulations of biological systems, including cellular subsystems (such as the networks of metabolites and enzymes which comprise metabolism , signal transduction pathways and gene regulatory networks ), to both analyze and visualize the complex connections of these cellular processes. [ 2 ]
An unexpected emergent property of a complex system may be a result of the interplay of the cause-and-effect among simpler, integrated parts (see biological organisation ). Biological systems manifest many important examples of emergent properties in the complex interplay of components. Traditional study of biological systems requires reductive methods in which quantities of data are gathered by category, such as concentration over time in response to a certain stimulus. Computers are critical to analysis and modelling of these data. The goal is to create accurate real-time models of a system's response to environmental and internal stimuli, such as a model of a cancer cell in order to find weaknesses in its signalling pathways, or modelling of ion channel mutations to see effects on cardiomyocytes and in turn, the function of a beating heart.
By far the most widely accepted standard format for storing and exchanging models in the field is the Systems Biology Markup Language (SBML) . [ 3 ] The SBML.org website includes a guide to many important software packages used in computational systems biology. A large number of models encoded in SBML can be retrieved from BioModels . Other markup languages with different emphases include BioPAX , CellML and MorpheusML . [ 4 ]
Creating a cellular model has been a particularly challenging task of systems biology and mathematical biology . It involves the use of computer simulations of the many cellular subsystems such as the networks of metabolites , enzymes which comprise metabolism and transcription , translation , regulation and induction of gene regulatory networks. [ 5 ]
The complex network of biochemical reaction/transport processes and their spatial organization make the development of a predictive model of a living cell a grand challenge for the 21st century, listed as such by the National Science Foundation (NSF) in 2006. [ 6 ]
A whole cell computational model for the bacterium Mycoplasma genitalium , including all its 525 genes, gene products, and their interactions, was built by scientists from Stanford University and the J. Craig Venter Institute and published on 20 July 2012 in Cell. [ 7 ]
A dynamic computer model of intracellular signaling was the basis for Merrimack Pharmaceuticals to discover the target for their cancer medicine MM-111. [ 8 ]
Membrane computing is the task of modelling specifically a cell membrane .
An open source simulation of C. elegans at the cellular level is being pursued by the OpenWorm community. So far the physics engine Gepetto has been built and models of the neural connectome and a muscle cell have been created in the NeuroML format. [ 9 ]
Protein structure prediction is the prediction of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of a protein's tertiary structure from its primary structure . It is one of the most important goals pursued by bioinformatics and theoretical chemistry . Protein structure prediction is of high importance in medicine (for example, in drug design ) and biotechnology (for example, in the design of novel enzymes ). Every two years, the performance of current methods is assessed in the CASP experiment.
The Blue Brain Project is an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level. The aim of this project, founded in May 2005 by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne , Switzerland, is to study the brain's architectural and functional principles. The project is headed by the Institute's director, Henry Markram. Using a Blue Gene supercomputer running Michael Hines's NEURON software , the simulation does not consist simply of an artificial neural network , but involves a partially biologically realistic model of neurons . [ 10 ] [ 11 ] It is hoped by its proponents that it will eventually shed light on the nature of consciousness .
There are a number of sub-projects, including the Cajal Blue Brain , coordinated by the Supercomputing and Visualization Center of Madrid (CeSViMa), and others run by universities and independent laboratories in the UK, U.S., and Israel. The Human Brain Project builds on the work of the Blue Brain Project. [ 12 ] [ 13 ] It is one of six pilot projects in the Future Emerging Technologies Research Program of the European Commission, [ 14 ] competing for a billion euros of funding.
The last decade has seen the emergence of a growing number of simulations of the immune system. [ 15 ] [ 16 ]
The Virtual Liver project is a 43 million euro research program funded by the German Government, made up of seventy research groups distributed across Germany. The goal is to produce a virtual liver, a dynamic mathematical model that represents human liver physiology , morphology and function. [ 17 ]
Electronic trees (e-trees) usually use L-systems to simulate growth. L-systems are very important in the field of complexity science and A-life .
A universally accepted system for describing changes in plant morphology at the cellular or modular level has yet to be devised. [ 18 ] The most widely implemented tree generating algorithms are described in the papers "Creation and Rendering of Realistic Trees" and Real-Time Tree Rendering .
Ecosystem models are mathematical representations of ecosystems . Typically they simplify complex foodwebs down to their major components or trophic levels , and quantify these as either numbers of organisms , biomass or the inventory / concentration of some pertinent chemical element (for instance, carbon or a nutrient species such as nitrogen or phosphorus ).
The purpose of models in ecotoxicology is the understanding, simulation and prediction of effects caused by toxicants in the environment. Most current models describe effects on one of many different levels of biological organization (e.g. organisms or populations). A challenge is the development of models that predict effects across biological scales. Ecotoxicology and models discusses some types of ecotoxicological models and provides links to many others.
It is possible to model the progress of most infectious diseases mathematically to discover the likely outcome of an epidemic or to help manage them by vaccination . This field tries to find parameters for various infectious diseases and to use those parameters to make useful calculations about the effects of a mass vaccination programme. | https://en.wikipedia.org/wiki/Modelling_biological_systems |
Models as Mediators: Perspectives on Natural and Social Science [ 1 ] is a multi-author book edited by Mary S. Morgan and Margaret Morrison and published in 1999 by Cambridge University Press .
The volume looks at the workings of models in the social and natural sciences, with a focus on economics and physics . [ 2 ] The book illustrates the concept of models as mediating between theory and the world while remaining independent from both. [ 2 ] It offers a historical and philosophical discussion of what models are and of what models do, with detailed examples written by the editors and by scholars such as Ursula Klein , Marcel Boumans, R.I.G. Hughes, Mauricio Suárez , Geert Reuten , Nancy Cartwright , Adrienne van den Boogard and Stephan Hartmann .
Models and theories are related, so that an evolution in the perception of what a scientific theory is also changes the perception of what models are. [ 2 ] The concept of scientific theory has moved from the ' received view ', whereby a theory can be seen as an axiomatic system to be dealt with in the context of the discipline of logic , to a new conception of theory framed in terms of semantics, whereby models acquire a new prominence as the 'fundamental unit of scientific theorizing, theories themselves being families of models'. [ 2 ] Most of the examples in the book are quite articulated and pertinent to either physics or economics , with one, offered by historian of science Ursula Klein , in chemistry.
The introduction written by Margaret Morrison and Mary S. Morgan discusses the syntactic versus semantic view of theories and how these consider models. The second chapter by the same authors, entitled 'Models as mediating instruments' (a key chapter in the economy of the volume [ 2 ] ), introduces the unifying theme of the work: the concept of models as mediators between theory and world. Models may represent 'some aspect of our theories about the world'. [ 1 ] : 11 While they may act as mediators between theory and world, they are 'situated outside the theory-world axis'. [ 1 ] : 11
The chapter also details what has been called a 'functionalist' [ 2 ] articulation of the difference between models and theory, namely in four functions served by models: the first is how they are constructed, deriving elements from one or more theories, other models, and the world. The second function is the use of models as instruments for the exploration and development of theory and/or for the design of better experiments. The third is their use to 'represent' beyond what a theory alone can offer. The fourth function is the capacity of the model to enhance learning, though this function is also present in the preceding three. [ 1 ] : 10–37 Morrison and Morgan emphasize that models can thus be regarded as 'technologies for investigation': one learns by manipulating and playing with them. [ 3 ]
Chapter 3 by Margaret Morrison, entitled 'Models as autonomous agents', elaborates on the autonomy of models with example from physics. Chapter 4 'Built in justification' is from Marcel Boumans. Using examples from economics Boumans shows that models 'integrate a broader range of ingredients than only theory and data': these are theoretical notions, mathematical concepts and techniques, stylized facts, empirical data, policy views, analogies, metaphors. [ 1 ] : 93 [ 3 ]
Chapter 5, from R.I.G. Hughes, discusses how the development of computers and simulation changed the relation between models and theory. It was the use of computer simulation that permitted the Ising model to be accepted. In Chapter 6, by Ursula Klein , chemical formulae as developed by Jöns Jacob Berzelius in 1813 are presented as 'paper tools' permitting representation and the construction of models. In Chapter 7, Mauricio Suárez discusses how an essential feature of models as mediators is that they may replace the phenomenon itself in becoming the focus of scientific research, and illustrates this feature with an example from superconductivity in physics.
Chapter 8, from Geert Reuten , 'Knife edge caricature modelling: the case of Marx's reproduction schema', is a 'detailed historical reconstruction' or exegesis of Marxian economics , with little general theory of models. [ 2 ] Chapter 9, from Nancy Cartwright , 'Models and the limit of theory: quantum Hamiltonians and the BCS model of superconductivity', distinguishes what she calls representative models, which are accurate to the phenomena, from more theory-internal models that bridge elements of the theory with one another and are named interpretative models, especially in fields such as quantum mechanics, quantum electrodynamics, classical mechanics and classical electromagnetic theory. [ 1 ] : 242–243 For example, an abstract concept such as force 'can only exist in particular mechanical models'. [ 1 ] : 257
Adrienne van den Boogard notes that 'the model is also a social and political device', [ 1 ] : 283 and shows how institutions influenced (but were also conditioned by) 'the usage of different models and statistical techniques.' [ 2 ] van den Boogard provides an illustration based on economic models and index numbers developed in the Netherlands, discussing, for example, unemployment statistics.
For Stephan Hartmann , empirical adequacy and logical consistency are not the only criteria for model acceptance. [ 1 ] : 326 The story told by a model matters to its acceptability, and thus to its function and quality. The argument is developed in the final chapter of the volume, entitled 'Models and stories in hadron physics'.
One review notes that while the title appears to promise a unified theory of models the chapters point instead to a universe of possible ways to characterize the nature and use of models. [ 2 ] Acting as mediators, models are partly independent from both theory and world, and this independence, that ensures the versatility of 'models as autonomous agents', [ 1 ] : 10 is also the reason why they resist an attempt to a unified 'theory of models'. [ 2 ]
The 'functionalist', rather than philosophical, approach of the work, i.e. more about what a model does than what a model is, leaves several questions unanswered (or answered in different ways in different chapters). [ 2 ] Furthermore, the examples are quite technical and detailed, not easy to read for the non-initiated. [ 2 ] Important epistemological questions left open concern, for example, why individual models are constructed with their particular degrees of independence from theory and experiment. [ 4 ] Being focused on models in physics, chemistry and economics, the book leaves out biological models. [ 3 ]
The book lays the basis for a research programme for studying models from the point of view of scientific practice providing 'a potential bridge between philosophical theorising and the more practice-oriented approach of STS '. [ 5 ] | https://en.wikipedia.org/wiki/Models_as_Mediators |
A number of different Markov models of DNA sequence evolution have been proposed. [ 1 ] These substitution models differ in terms of the parameters used to describe the rates at which one nucleotide replaces another during evolution. These models are frequently used in molecular phylogenetic analyses . In particular, they are used during the calculation of likelihood of a tree (in Bayesian and maximum likelihood approaches to tree estimation) and they are used to estimate the evolutionary distance between sequences from the observed differences between the sequences.
These models are phenomenological descriptions of the evolution of DNA as a string of four discrete states. These Markov models do not explicitly depict the mechanism of mutation nor the action of natural selection. Rather they describe the relative rates of different changes. For example, mutational biases and purifying selection favoring conservative changes are probably both responsible for the relatively high rate of transitions compared to transversions in evolving sequences. However, the Kimura (K80) model described below attempts to capture the effect of both forces only in a single parameter that reflects the relative rate of transitions to transversions.
Evolutionary analyses of sequences are conducted on a wide variety of time scales. Thus, it is convenient to express these models in terms of the instantaneous rates of change between different states (the Q matrices below). If we are given a starting (ancestral) state at one position, the model's Q matrix and a branch length expressing the expected number of changes to have occurred since the ancestor, then we can derive the probability of the descendant sequence having each of the four states. The mathematical details of this transformation from rate-matrix to probability matrix are described in the mathematics of substitution models section of the substitution model page. By expressing models in terms of the instantaneous rates of change we can avoid estimating a large number of parameters for each branch on a phylogenetic tree (or each comparison if the analysis involves many pairwise sequence comparisons).
The models described on this page describe the evolution of a single site within a set of sequences. They are often used for analyzing the evolution of an entire locus by making the simplifying assumption that different sites evolve independently and are identically distributed . This assumption may be justifiable if the sites can be assumed to be evolving neutrally . If the primary effect of natural selection on the evolution of the sequences is to constrain some sites, then models of among-site rate-heterogeneity can be used. This approach allows one to estimate only one matrix of relative rates of substitution, and another set of parameters describing the variance in the total rate of substitution across sites.
Continuous-time Markov chains have the usual transition matrices
which are, in addition, parameterized by time, t {\displaystyle t} . Specifically, if E 1 , E 2 , E 3 , E 4 {\displaystyle E_{1},E_{2},E_{3},E_{4}} are the states, then the transition matrix is P(t) = (p_{ij}(t)), where each entry p_{ij}(t) is the probability that state E_i will change to state E_j within time t.
Example: We would like to model the substitution process in DNA sequences ( i.e. Jukes–Cantor , Kimura, etc. ) in a continuous-time fashion. The corresponding transition matrices will look like:
where the top-left and bottom-right 2 × 2 blocks correspond to transition probabilities and the top-right and bottom-left 2 × 2 blocks correspond to transversion probabilities .
Assumption: If at some time t 0 {\displaystyle t_{0}} , the Markov chain is in state E i {\displaystyle E_{i}} , then the probability that at time t 0 + t {\displaystyle t_{0}+t} , it will be in state E j {\displaystyle E_{j}} depends only upon i {\displaystyle i} , j {\displaystyle j} and t {\displaystyle t} . This then allows us to write that probability as p i j ( t ) {\displaystyle p_{ij}(t)} .
Theorem: Continuous-time transition matrices satisfy the Chapman–Kolmogorov relation: P(t + s) = P(t)P(s).
Note: There is here a possible confusion between two meanings of the word transition . (i) In the context of Markov chains , transition is the general term for the change between two states. (ii) In the context of nucleotide changes in DNA sequences , transition is a specific term for the exchange between either the two purines (A ↔ G) or the two pyrimidines (C ↔ T) (for additional details, see the article about transitions in genetics ). By contrast, an exchange between one purine and one pyrimidine is called a transversion .
Consider a DNA sequence of fixed length m evolving in time by base replacement. Assume that the processes followed by the m sites are independent, identically distributed Markov processes and that the process is constant over time. For a particular site, let
E = {A, G, C, T} be the set of possible states for the site, and p_A(t), p_G(t), p_C(t), p_T(t)
their respective probabilities at time t {\displaystyle t} . For two distinct x , y ∈ E {\displaystyle x,y\in {\mathcal {E}}} , let μ x y {\displaystyle \mu _{xy}\ } be the transition rate from state x {\displaystyle x} to state y {\displaystyle y} . Similarly, for any x {\displaystyle x} , let the total rate of change from x {\displaystyle x} be
The changes in the probability distribution p A ( t ) {\displaystyle p_{A}(t)} for small increments of time Δ t {\displaystyle \Delta t} are given by p_A(t + Δt) = p_A(t) − μ_A p_A(t) Δt + Σ_{x≠A} μ_{xA} p_x(t) Δt.
In other words, (in frequentist language), the frequency of A {\displaystyle A} 's at time t + Δ t {\displaystyle t+\Delta t} is equal to the frequency at time t {\displaystyle t} minus the frequency of the lost A {\displaystyle A} 's plus the frequency of the newly created A {\displaystyle A} 's.
Similarly for the probabilities p G ( t ) {\displaystyle p_{G}(t)} , p C ( t ) {\displaystyle p_{C}(t)} and p T ( t ) {\displaystyle p_{T}(t)} . These equations can be written compactly as d p(t)/dt = p(t) Q,
where
is known as the rate matrix . Note that, by definition, the sum of the entries in each row of Q {\displaystyle Q} is equal to zero. It follows that the components of p(t) always sum to one, i.e. total probability is conserved.
For a stationary process , where Q {\displaystyle Q} does not depend on time t , this differential equation can be solved: p(t) = p(0) exp(tQ), where exp ( t Q ) {\displaystyle \exp(tQ)} denotes the exponential of the matrix t Q {\displaystyle tQ} . As a result, the transition probabilities are given by the entries of the matrix P(t) = exp(tQ).
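As a minimal numerical sketch, the matrix exponential can be computed directly; here the Jukes–Cantor rate matrix (introduced formally below) is used with an arbitrary μ, both choices being illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

mu = 1.0  # arbitrary overall substitution rate for the illustration
# JC69-style rate matrix: off-diagonal entries mu/4, diagonal entries -3*mu/4
Q = (mu / 4.0) * (np.ones((4, 4)) - 4.0 * np.eye(4))

P = expm(0.5 * Q)      # transition probabilities P(t) at t = 0.5
print(P)
print(P.sum(axis=1))   # each row sums to 1, as a probability matrix must
```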
If the Markov chain is irreducible , i.e. if it is always possible to go from a state x to a state y (possibly in several steps), then it is also ergodic . As a result, it has a unique stationary distribution π = {π_x, x ∈ E}, where π_x corresponds to the proportion of time spent in state x after the Markov chain has run for an infinite amount of time. In DNA evolution, under the assumption of a common process for each site, the stationary frequencies π_A, π_G, π_C, π_T correspond to equilibrium base compositions. Indeed, note that since the stationary distribution π satisfies πQ = 0, we see that when the current distribution p(t) is the stationary distribution π we have d p(t)/dt = πQ = 0.
In other words, the frequencies of p A ( t ) , p G ( t ) , p C ( t ) , p T ( t ) {\displaystyle p_{A}(t),\,p_{G}(t),\,p_{C}(t),\,p_{T}(t)} do not change.
Definition : A stationary Markov process is time reversible if (in the steady state) the amount of change from state x {\displaystyle x\ } to y {\displaystyle y\ } is equal to the amount of change from y {\displaystyle y\ } to x {\displaystyle x\ } (although the two states may occur with different frequencies). This means that: π_x μ_{xy} = π_y μ_{yx}.
Not all stationary processes are reversible; however, most commonly used DNA evolution models assume time reversibility, which is considered to be a reasonable assumption.
Under the time reversibility assumption, let s x y = μ x y / π y {\displaystyle s_{xy}=\mu _{xy}/\pi _{y}\ } ; then it is easy to see that: s_{xy} = s_{yx}, since time reversibility implies μ_{xy}/π_y = μ_{yx}/π_x.
Definition The symmetric term s x y {\displaystyle s_{xy}\ } is called the exchangeability between states x {\displaystyle x\ } and y {\displaystyle y\ } . In other words, s x y {\displaystyle s_{xy}\ } is the fraction of the frequency of state x {\displaystyle x\ } that is the result of transitions from state y {\displaystyle y\ } to state x {\displaystyle x\ } .
Corollary The 12 off-diagonal entries of the rate matrix, Q {\displaystyle Q\ } (note the off-diagonal entries determine the diagonal entries, since the rows of Q {\displaystyle Q\ } sum to zero) can be completely determined by 9 numbers; these are: 6 exchangeability terms and 3 stationary frequencies π x {\displaystyle \pi _{x}\ } , (since the stationary frequencies sum to 1).
By comparing extant sequences, one can determine the amount of sequence divergence. This raw measurement of divergence provides information about the number of changes that have occurred along the path separating the sequences. The simple count of differences (the Hamming distance ) between sequences will often underestimate the number of substitutions because of multiple hits (see homoplasy ). Trying to estimate the exact number of changes that have occurred is difficult, and usually not necessary. Instead, branch lengths (and path lengths) in phylogenetic analyses are usually expressed in the expected number of changes per site. The path length is the product of the duration of the path in time and the mean rate of substitutions. While their product can be estimated, the rate and time are not identifiable from sequence divergence.
The descriptions of rate matrices on this page accurately reflect the relative magnitude of different substitutions, but these rate matrices are not scaled such that a branch length of 1 yields one expected change. This scaling can be accomplished by multiplying every element of the matrix by the same factor, or simply by scaling the branch lengths. If we use the β to denote the scaling factor, and ν to denote the branch length measured in the expected number of substitutions per site then βν is used in the transition probability formulae below in place of μ t . Note that ν is a parameter to be estimated from data, and is referred to as the branch length, while β is simply a number that can be calculated from the rate matrix (it is not a separate free parameter).
The value of β can be found by forcing the expected rate of flux of states to 1. The diagonal entries of the rate-matrix (the Q matrix) represent -1 times the rate of leaving each state. For time-reversible models, we know the equilibrium state frequencies (these are simply the π i parameter value for state i ). Thus we can find the expected rate of change by calculating the sum of flux out of each state weighted by the proportion of sites that are expected to be in that class. Setting β to be the reciprocal of this sum will guarantee that the scaled process has an expected flux of 1: β = 1 / (−Σ_i π_i q_{ii}), where the q_{ii} are the diagonal entries of Q.
For example, in the Jukes–Cantor, the scaling factor would be 4/(3μ) because the rate of leaving each state is 3μ/4 .
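The following sketch checks this numerically for the Jukes–Cantor case; the value of μ is an arbitrary assumption for illustration.

```python
import numpy as np

mu = 2.0
Q = (mu / 4.0) * (np.ones((4, 4)) - 4.0 * np.eye(4))  # JC69 rate matrix
pi = np.full(4, 0.25)                                 # equilibrium frequencies

beta = 1.0 / -(pi @ np.diag(Q))   # reciprocal of the expected flux out of states
print(beta, 4.0 / (3.0 * mu))     # both print the same value
```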
JC69, the Jukes and Cantor 1969 model, [ 2 ] is the simplest substitution model . There are several assumptions. It assumes equal base frequencies ( π A = π G = π C = π T = 1 4 ) {\displaystyle \left(\pi _{A}=\pi _{G}=\pi _{C}=\pi _{T}={1 \over 4}\right)} and equal mutation rates . The only parameter of this model is therefore μ {\displaystyle \mu } , the overall substitution rate. As previously mentioned, this variable becomes a constant when we normalize the mean-rate to 1.
When branch length, ν {\displaystyle \nu } , is measured in the expected number of changes per site, the transition probabilities are: P_{ij}(ν) = 1/4 − (1/4)e^{−4ν/3} for i ≠ j, and P_{ii}(ν) = 1/4 + (3/4)e^{−4ν/3}.
It is worth noticing that ν = (3/4)μt = (μ/4 + μ/4 + μ/4)t, which is the sum of the off-diagonal entries of any row (or column) of the matrix Q {\displaystyle Q} multiplied by time, and thus is the expected number of substitutions in time t {\displaystyle t} (branch duration) for each particular site (per site) when the rate of substitution equals μ {\displaystyle \mu } .
Given the proportion p {\displaystyle p} of sites that differ between the two sequences, the Jukes–Cantor estimate of the evolutionary distance (in terms of the expected number of changes) between two sequences is given by d = −(3/4) ln(1 − (4/3)p).
The p {\displaystyle p} in this formula is frequently referred to as the p {\displaystyle p} -distance. It is a sufficient statistic for calculating the Jukes–Cantor distance correction, but is not sufficient for the calculation of the evolutionary distance under the more complex models that follow (also note that p {\displaystyle p} used in subsequent formulae is not identical to the " p {\displaystyle p} -distance").
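A minimal sketch of the Jukes–Cantor correction, computing the p-distance from two aligned sequences and applying the formula above (the example sequences are invented):

```python
import math

def jc69_distance(seq1, seq2):
    """Jukes-Cantor corrected evolutionary distance between aligned sequences."""
    assert len(seq1) == len(seq2)
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)  # p-distance
    return -0.75 * math.log(1.0 - (4.0 / 3.0) * p)

print(jc69_distance("ACGTACGTAC", "ACGTACGAAC"))  # one difference in ten sites
```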
K80, the Kimura 1980 model, [ 3 ] often referred to as Kimura's two parameter model (or the K2P model ), distinguishes between transitions ( A ↔ G {\displaystyle A\leftrightarrow G} , i.e. from purine to purine, or C ↔ T {\displaystyle C\leftrightarrow T} , i.e. from pyrimidine to pyrimidine) and transversions (from purine to pyrimidine or vice versa). In Kimura's original description of the model the α and β were used to denote the rates of these types of substitutions, but it is now more common to set the rate of transversions to 1 and use κ to denote the transition/transversion rate ratio (as is done below). The K80 model assumes that all of the bases are equally frequent ( π A = π G = π C = π T = 1 4 {\displaystyle \pi _{A}=\pi _{G}=\pi _{C}=\pi _{T}={1 \over 4}} ).
Rate matrix Q = ( ∗ κ 1 1 κ ∗ 1 1 1 1 ∗ κ 1 1 κ ∗ ) {\displaystyle Q={\begin{pmatrix}{*}&{\kappa }&{1}&{1}\\{\kappa }&{*}&{1}&{1}\\{1}&{1}&{*}&{\kappa }\\{1}&{1}&{\kappa }&{*}\end{pmatrix}}} with columns corresponding to A {\displaystyle A} , G {\displaystyle G} , C {\displaystyle C} , and T {\displaystyle T} , respectively.
The Kimura two-parameter distance is given by: d = −(1/2) ln(1 − 2p − q) − (1/4) ln(1 − 2q),
where p is the proportion of sites that show transitional differences and q is the proportion of sites that show transversional differences.
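A corresponding sketch for the Kimura two-parameter distance, with p and q supplied directly (the input values are invented):

```python
import math

def k2p_distance(p, q):
    """Kimura two-parameter distance.
    p: proportion of sites with transitional differences
    q: proportion of sites with transversional differences
    """
    return -0.5 * math.log(1.0 - 2.0 * p - q) - 0.25 * math.log(1.0 - 2.0 * q)

print(k2p_distance(0.08, 0.04))
```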
K81, the Kimura 1981 model, [ 4 ] often called Kimura's three parameter model (K3P model) or the Kimura three substitution type (K3ST) model, has distinct rates for transitions and two distinct types of transversions . The two transversion types are those that conserve the weak/strong properties of the nucleotides (i.e., A ↔ T {\displaystyle A\leftrightarrow T} and C ↔ G {\displaystyle C\leftrightarrow G} , denoted by symbol γ {\displaystyle \gamma } [ 4 ] ) and those that conserve the amino/keto properties of the nucleotides (i.e., A ↔ C {\displaystyle A\leftrightarrow C} and G ↔ T {\displaystyle G\leftrightarrow T} , denoted by symbol β {\displaystyle \beta } [ 4 ] ). The K81 model assumes that all equilibrium base frequencies are equal (i.e., π A = π G = π C = π T = 0.25 {\displaystyle \pi _{A}=\pi _{G}=\pi _{C}=\pi _{T}=0.25} ).
Rate matrix Q = ( ∗ α β γ α ∗ γ β β γ ∗ α γ β α ∗ ) {\displaystyle Q={\begin{pmatrix}{*}&{\alpha }&{\beta }&{\gamma }\\{\alpha }&{*}&{\gamma }&{\beta }\\{\beta }&{\gamma }&{*}&{\alpha }\\{\gamma }&{\beta }&{\alpha }&{*}\end{pmatrix}}} with columns corresponding to A {\displaystyle A} , G {\displaystyle G} , C {\displaystyle C} , and T {\displaystyle T} , respectively.
The K81 model is used much less often than the K80 (K2P) model for distance estimation and it is seldom the best-fitting model in maximum likelihood phylogenetics. Despite these facts, the K81 model has continued to be studied in the context of mathematical phylogenetics. [ 5 ] [ 6 ] [ 7 ] One important property is the ability to perform a Hadamard transform assuming the site patterns were generated on a tree with nucleotides evolving under the K81 model. [ 8 ] [ 9 ] [ 10 ]
When used in the context of phylogenetics the Hadamard transform provides an elegant and fully invertible means to calculate expected site pattern frequencies given a set of branch lengths (or vice versa). Unlike many maximum likelihood calculations, the relative values for α {\displaystyle \alpha } , β {\displaystyle \beta } , and γ {\displaystyle \gamma } can vary across branches and the Hadamard transform can even provide evidence that the data do not fit a tree. The Hadamard transform can also be combined with a wide variety of methods to accommodate among-sites rate heterogeneity, [ 11 ] using continuous distributions rather than the discrete approximations typically used in maximum likelihood phylogenetics [ 12 ] (although one must sacrifice the invertibility of the Hadamard transform to use certain among-sites rate heterogeneity distributions [ 11 ] ).
F81, the Felsenstein's 1981 model, [ 13 ] is an extension of the JC69 model in which base frequencies are allowed to vary from 0.25 ( π A ≠ π G ≠ π C ≠ π T {\displaystyle \pi _{A}\neq \pi _{G}\neq \pi _{C}\neq \pi _{T}} )
Rate matrix: Q = ( ∗ π_G π_C π_T ; π_A ∗ π_C π_T ; π_A π_G ∗ π_T ; π_A π_G π_C ∗ ), i.e. the HKY85 matrix below with κ = 1, with columns corresponding to A, G, C, and T, respectively.
When branch length, ν, is measured in the expected number of changes per site then: P_{ij}(ν) = e^{−βν} δ_{ij} + π_j (1 − e^{−βν}), with β = 1/(1 − π_A² − π_G² − π_C² − π_T²).
HKY85, the Hasegawa, Kishino and Yano 1985 model, [ 14 ] can be thought of as combining the extensions made in the Kimura80 and Felsenstein81 models. Namely, it distinguishes between the rate of transitions and transversions (using the κ parameter), and it allows unequal base frequencies ( π A ≠ π G ≠ π C ≠ π T {\displaystyle \pi _{A}\neq \pi _{G}\neq \pi _{C}\neq \pi _{T}} ). [ Felsenstein described a similar (but not equivalent) model in 1984 using a different parameterization; [ 15 ] that latter model is referred to as the F84 model. [ 16 ] ]
Rate matrix Q = ( ∗ κ π G π C π T κ π A ∗ π C π T π A π G ∗ κ π T π A π G κ π C ∗ ) {\displaystyle Q={\begin{pmatrix}{*}&{\kappa \pi _{G}}&{\pi _{C}}&{\pi _{T}}\\{\kappa \pi _{A}}&{*}&{\pi _{C}}&{\pi _{T}}\\{\pi _{A}}&{\pi _{G}}&{*}&{\kappa \pi _{T}}\\{\pi _{A}}&{\pi _{G}}&{\kappa \pi _{C}}&{*}\end{pmatrix}}}
If we express the branch length, ν in terms of the expected number of changes per site then:
and formulae for the other combinations of states can be obtained by substituting in the appropriate base frequencies.
T92, the Tamura 1992 model, [ 17 ] is a mathematical method developed to estimate the number of nucleotide substitutions per site between two DNA sequences, by extending Kimura's (1980) two-parameter method to the case where a G+C content bias exists. This method will be useful when there are strong transition-transversion and G+C-content biases, as in the case of Drosophila mitochondrial DNA. [ 17 ]
T92 involves a single, compound base frequency parameter θ ∈ ( 0 , 1 ) {\displaystyle \theta \in (0,1)} (also noted π G C {\displaystyle \pi _{GC}} ) = π G + π C = 1 − ( π A + π T ) {\displaystyle =\pi _{G}+\pi _{C}=1-(\pi _{A}+\pi _{T})}
As T92 echoes Chargaff's second parity rule (pairing nucleotides have the same frequency on a single DNA strand: G and C on the one hand, and A and T on the other hand), it follows that the four base frequencies can be expressed as a function of π G C {\displaystyle \pi _{GC}}
π G = π C = π G C 2 {\displaystyle \pi _{G}=\pi _{C}={\pi _{GC} \over 2}} and π A = π T = ( 1 − π G C ) 2 {\displaystyle \pi _{A}=\pi _{T}={(1-\pi _{GC}) \over 2}}
Rate matrix Q = ( ∗ κ π G C / 2 π G C / 2 ( 1 − π G C ) / 2 κ ( 1 − π G C ) / 2 ∗ π G C / 2 ( 1 − π G C ) / 2 ( 1 − π G C ) / 2 π G C / 2 ∗ κ ( 1 − π G C ) / 2 ( 1 − π G C ) / 2 π G C / 2 κ π G C / 2 ∗ ) {\displaystyle Q={\begin{pmatrix}{*}&{\kappa \pi _{GC}/2}&{\pi _{GC}/2}&{(1-\pi _{GC})/2}\\{\kappa (1-\pi _{GC})/2}&{*}&{\pi _{GC}/2}&{(1-\pi _{GC})/2}\\{(1-\pi _{GC})/2}&{\pi _{GC}/2}&{*}&{\kappa (1-\pi _{GC})/2}\\{(1-\pi _{GC})/2}&{\pi _{GC}/2}&{\kappa \pi _{GC}/2}&{*}\end{pmatrix}}}
The evolutionary distance between two DNA sequences according to this model is given by d = −h ln(1 − p/h − q) − (1/2)(1 − h) ln(1 − 2q), in which p and q are the proportions of sites showing transitional and transversional differences, respectively,
where h = 2 θ ( 1 − θ ) {\displaystyle h=2\theta (1-\theta )} and θ {\displaystyle \theta } is the G+C content ( π G C = π G + π C {\displaystyle \pi _{GC}=\pi _{G}+\pi _{C}} ).
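A sketch of the T92 distance as stated above (the formula as written here is a standard statement of Tamura's correction, and the input values are invented):

```python
import math

def t92_distance(p, q, theta):
    """Tamura (1992) distance.
    p, q  : proportions of transitional / transversional differences
    theta : G+C content of the sequences
    """
    h = 2.0 * theta * (1.0 - theta)
    return -h * math.log(1.0 - p / h - q) - 0.5 * (1.0 - h) * math.log(1.0 - 2.0 * q)

print(t92_distance(0.08, 0.04, 0.55))
```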
TN93, the Tamura and Nei 1993 model, [ 18 ] distinguishes between the two different types of transition ; i.e. ( A ↔ G {\displaystyle A\leftrightarrow G} ) is allowed to have a different rate to ( C ↔ T {\displaystyle C\leftrightarrow T} ). Transversions are all assumed to occur at the same rate, but that rate is allowed to be different from both of the rates for transitions.
TN93 also allows unequal base frequencies ( π A ≠ π G ≠ π C ≠ π T {\displaystyle \pi _{A}\neq \pi _{G}\neq \pi _{C}\neq \pi _{T}} ).
Rate matrix Q = ( ∗ κ 1 π G π C π T κ 1 π A ∗ π C π T π A π G ∗ κ 2 π T π A π G κ 2 π C ∗ ) {\displaystyle Q={\begin{pmatrix}{*}&{\kappa _{1}\pi _{G}}&{\pi _{C}}&{\pi _{T}}\\{\kappa _{1}\pi _{A}}&{*}&{\pi _{C}}&{\pi _{T}}\\{\pi _{A}}&{\pi _{G}}&{*}&{\kappa _{2}\pi _{T}}\\{\pi _{A}}&{\pi _{G}}&{\kappa _{2}\pi _{C}}&{*}\end{pmatrix}}}
GTR, the Generalised time-reversible model of Tavaré 1986, [ 19 ] is the most general neutral, independent, finite-sites, time-reversible model possible. It was first described in a general form by Simon Tavaré in 1986. [ 19 ]
GTR parameters consist of an equilibrium base frequency vector, Π = ( π A , π G , π C , π T ) {\displaystyle \Pi =(\pi _{A},\pi _{G},\pi _{C},\pi _{T})} , giving the frequency at which each base occurs at each site, and the rate matrix
Q = ( ∗ απ_G βπ_C γπ_T ; απ_A ∗ δπ_C επ_T ; βπ_A δπ_G ∗ ηπ_T ; γπ_A επ_G ηπ_C ∗ ),
where
α = r ( A → G ) = r ( G → A ) β = r ( A → C ) = r ( C → A ) γ = r ( A → T ) = r ( T → A ) δ = r ( G → C ) = r ( C → G ) ϵ = r ( G → T ) = r ( T → G ) η = r ( C → T ) = r ( T → C ) {\displaystyle {\begin{aligned}\alpha =r(A\rightarrow G)=r(G\rightarrow A)\\\beta =r(A\rightarrow C)=r(C\rightarrow A)\\\gamma =r(A\rightarrow T)=r(T\rightarrow A)\\\delta =r(G\rightarrow C)=r(C\rightarrow G)\\\epsilon =r(G\rightarrow T)=r(T\rightarrow G)\\\eta =r(C\rightarrow T)=r(T\rightarrow C)\end{aligned}}}
are the transition rate parameters.
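As an illustrative sketch, a GTR rate matrix can be assembled from the six exchangeabilities and the equilibrium frequencies, then normalized so that a branch length of 1 corresponds to one expected substitution per site (the parameter values below are invented):

```python
import numpy as np

def gtr_rate_matrix(pi, rates):
    """Build a normalized GTR rate matrix.
    pi    : equilibrium frequencies (pi_A, pi_G, pi_C, pi_T)
    rates : exchangeabilities (alpha, beta, gamma, delta, epsilon, eta)
            for the pairs AG, AC, AT, GC, GT, CT respectively
    """
    a, b, g, d, e, h = rates
    S = np.array([[0, a, b, g],
                  [a, 0, d, e],
                  [b, d, 0, h],
                  [g, e, h, 0]], dtype=float)
    Q = S * np.asarray(pi)                 # q_xy = s_xy * pi_y for x != y
    np.fill_diagonal(Q, -Q.sum(axis=1))    # rows must sum to zero
    Q /= -(np.asarray(pi) @ np.diag(Q))    # scale so the expected flux is 1
    return Q

Q = gtr_rate_matrix([0.3, 0.2, 0.2, 0.3], [2.0, 1.0, 1.0, 1.0, 1.0, 2.0])
print(Q.sum(axis=1))  # ~0 for every row
```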
Therefore, GTR (for four characters, as is often the case in phylogenetics) requires 6 substitution rate parameters, as well as 4 equilibrium base frequency parameters. However, because the base frequencies sum to 1, this is usually reduced to 9 parameters plus μ {\displaystyle \mu } , the overall number of substitutions per unit time. When measuring time in substitutions ( μ {\displaystyle \mu } =1) only 8 free parameters remain.
In general, to compute the number of parameters, one must count the number of entries above the diagonal in the matrix, i.e. for n trait values per site n 2 − n 2 {\displaystyle {{n^{2}-n} \over 2}} , and then add n for the equilibrium base frequencies, and subtract 1 because μ {\displaystyle \mu } is fixed. One gets (n² − n)/2 + n − 1 = (n² + n − 2)/2.
For example, for an amino acid sequence (there are 20 "standard" amino acids that make up proteins ), one would find there are 209 parameters. However, when studying coding regions of the genome, it is more common to work with a codon substitution model (a codon is three bases and codes for one amino acid in a protein). There are 4 3 = 64 {\displaystyle 4^{3}=64} codons, but the rates for transitions between codons which differ by more than one base are assumed to be zero. Hence, there are 20 × 19 × 3 2 + 64 − 1 = 633 {\displaystyle {{20\times 19\times 3} \over 2}+64-1=633} parameters. | https://en.wikipedia.org/wiki/Models_of_DNA_evolution
Moder is a forest floor type formed under mixed-wood and pure deciduous forests . [ 1 ] [ 2 ] Moder is a kind of humus whose properties are transitional between the mor humus and mull humus types. [ 3 ] [ 4 ] Moders are similar to mors in that they are made up of partially to fully humified organic components accumulated on the mineral soil. Compared to mors, moders are zoologically active. [ 5 ] Moders thus sit in the middle between mors and mulls, with a higher decomposition capacity than mor but lower than mull. [ 2 ] Moders are characterized by a slow rate of litter decomposition by litter-dwelling organisms and fungi, leading to the accumulation of organic residues. Moder humus forms share features of both the mull and mor humus forms. [ 2 ]
Moders develop in semiarid, temperate, and Mediterranean climates. Chemically, moders show low acidity, total carbon, carbon-nitrogen ratio and cation exchange capacity, together with high total nitrogen and base saturation. [ 6 ] Moders have a higher availability of nutrients than mors. [ 7 ]
Moders form in deciduous forest situations when the soil has few micro-organisms, bacteria, and invertebrates, such as earthworms, to decompose the organic matter on the soil surface. The organic matter accumulation horizons are identified by capital letters. It is generally possible to observe three distinct "sub-layers" or horizons, designated the litter (L), fermentation (F), and humus (H) layers. [ 2 ] [ 7 ]
"L" litter: A horizon is defined by accumulating primary leaves (and needles), twigs, and woody materials, with the original structures visible. [ 2 ]
"F" Fermentation: A horizon defined by the buildup of partially decomposed organic matter generated primarily from leaves, twigs, and woody materials. Some of the original structures are difficult to identify, and materials may have been pounded into small pieces or particles in part by soil fauna, as in a "MODER". [ 2 ]
"H" Humus: A horizon defined by the accumulation of disintegrated organic matter in which the original structures are undetectable. It differs from the "F" horizon as more humification due to organisms' actions. It can partially merge into the mineral soil such as the "MODER". [ 2 ]
The Fa horizon is used to distinguish moder. [ 6 ] These layers are mostly made up of partially decomposed plant remains broken or comminuted by soil fauna and are loosely organized rather than matted like the Fq horizons. [ 2 ] An abundance of fine roots can sometimes give a matted appearance. A distinctive aspect of the Fa horizon, given its loose nature, is the abundance of soil fauna droppings, which can be seen under high magnification. [ 2 ] These droppings come from centipedes, millipedes, collembola, mites, isopods, and various insect larvae. [ 2 ] The fragmentation of plant residues by soil fauna facilitates a faster rate of decomposition. Bacteria, actinomycetes, and protozoa progressively contribute to the breakdown process, although fungi continue to play an important role. [ 2 ]
Soil fauna break plant residues into non-compact, loose arrangements mainly constituted of faunal droppings, which make up the friable Fz horizons in moders. [ 6 ] The moder order, like the mors, accumulates organic matter above the mineral horizons. [ 6 ] Fungal mycelia are reasonably expected, and plant residues may appear somewhat matted; however, many small insect droppings can still be observed. In most moder humus types, Hd horizons can be observed. [ 6 ] However, mor humus types might have similar horizons, [ 7 ] so this trait is not considered diagnostic of moders. According to micro-morphological observation of humus, H horizons have no distinct, differentiating characteristics that would indicate whether they are from mors or moders. [ 6 ]
The moder humus form may include, below the ectorganic horizons, only the Ah horizon of the endorganic horizons. [ 2 ] In this horizon, fine humus elements have permeated the mineral soil. Although soil animals may carry organic matter into the upper region of this horizon, this will only happen over a short distance. [ 2 ] The upper limit of the Ah horizon is often gradual. [ 2 ] | https://en.wikipedia.org/wiki/Moder_humus
Modern methods of construction (MMC) is a term used mainly in the UK construction industry to refer to "smart construction" processes designed to improve upon traditional design and construction approaches by focusing on (among other things) component and process standardisation, design for manufacture and assembly ( DfMA ), prefabrication , preassembly , off-site manufacture (including modular building ) and onsite innovations such as additive manufacture ( 3D printing ). While such modern approaches may be applied to infrastructure works (bridges, tunnels, etc.) and to commercial or industrial buildings, MMC has become particularly associated with construction of residential housing. However, several specialist housing businesses established to target this market did not become commercially viable.
The MMC term started to enter common industry use in the early 2000s following the publication of the Egan Report , Rethinking Construction , in November 1998. An industry task force chaired by Sir John Egan produced an influential report on the UK construction industry , [ 1 ] which did much to drive efficiency improvements in UK construction industry practice during the early years of the 21st century, [ 2 ] with its recommendations implemented through initiatives including the Movement for Innovation (M4I) and the Construction Best Practice Programme (CBBP). [ 3 ] However, the emergence of some non-traditional methods substantially predated Egan's report; procurement of prefabricated homes , for example, was a UK government response to housing shortages after both World Wars, the CLASP created prefabricated schools in the late 1950s, and the 1964-1970 Labour government engaged in an "Industrialised Building Drive". [ 3 ]
MMC has been repeatedly advocated in UK government construction strategy statements including the 2017 Transforming Infrastructure Performance from the Infrastructure and Projects Authority (IPA), [ 4 ] the 2019 Construction Sector Deal , [ 5 ] the Construction Playbook (2020, 2022), [ 6 ] and the IPA's 2021 TIP Roadmap to 2030 . [ 7 ] The 2022 Playbook and TIP Roadmap also encouraged procurement of construction projects based on product 'platforms' ("Platform Design for Manufacture and Assembly, PDfMA") comprising kits of parts, production processes, knowledge, people and relationships required to deliver all or part of construction projects.
The UK Government has also invested in MMC initiatives and businesses. During the 2010s, as government backing (including via Homes England ) for MMC grew, several UK companies (for example, Ilke Homes , L&G Modular Homes , House by Urban Splash , Modulous, Lighthouse and TopHat) were established to develop modular homes as an alternative to traditionally-built residences. From its Knaresborough , Yorkshire factory (opened in 2018, closed in 2023), Ilke Homes delivered two- and three-bedroom 'modular' homes that could be erected in 36 hours. [ 8 ] Homes England invested £30m in Ilke Homes in November 2019, [ 9 ] and a further £30m in September 2021. [ 10 ] Despite a further fund-raising round, raising £100m in December 2022, [ 11 ] [ 12 ] Ilke Homes went into administration on 30 June 2023, [ 13 ] [ 14 ] with most of the company's 1,150 staff made redundant, [ 15 ] and creditors owed £320m, [ 16 ] including £68m owed to Homes England. [ 17 ] L&G Modular Homes halted production in May 2023, blaming planning delays and the COVID-19 pandemic for its failure, [ 18 ] [ 19 ] with the enterprise incurring total losses over seven years of £295m. [ 20 ]
In November 2023, Homes England loaned £15m to TopHat, another loss-making MMC housebuilder, to fund construction of a factory in Corby ; [ 21 ] in March 2024, the factory's opening was postponed [ 22 ] and the company announced 70 redundancies. [ 23 ] MD Andrew Shepherd left TopHat in May 2024. [ 24 ] In August 2024, TopHat faced a winding-up hearing after a petition was filed by Harworth, a Yorkshire based property developer, [ 25 ] but settled out of court. [ 26 ] Also in August 2024, housebuilder Persimmon wrote off a £25m investment in TopHat it made in 2023, due to "a re-assessment of risks within the modular build sector". [ 27 ] In October 2024, having accumulated a loss of around £87m since 2016, TopHat confirmed it was winding down its Derby factory operations, with most staff being made redundant. [ 28 ] In its penultimate year of trading, TopHat made an operating loss of £46m on turnover of less than £11m. [ 29 ]
In January 2024, following the high-profile failures of Ilke Homes, L&G Modular and House by Urban Splash during 2022 and 2023, the House of Lords Built Environment Committee highlighted that the UK Government needed to take a more coherent approach to addressing barriers affecting adoption of MMC: "If the Government wants the sector to be a success, it needs to take a step back, acquire a better understanding of how it works and the help that it needs, set achievable goals and develop a coherent strategy." [ 30 ] [ 31 ] [ 32 ] Modulous and Lighthouse went into administration in January and March 2024 respectively. [ 33 ] [ 34 ] In late March 2024, housing minister Lee Rowley told the Lords Committee that the government would be reviewing its MMC policies in light of the crisis in the volumetric house-building sector. He promised "a full update in late spring once we have undertaken further detailed work with the sector". [ 35 ] Following the July 2024 general election , a House of Lords Library report was published in August 2024 ahead of a scheduled debate in September 2024; [ 36 ] it said the new Labour government would publish a new long-term housing strategy "in the coming months". [ 36 ]
MMC refers to a variety of off-site construction methods: [ 37 ]
During the early 2000s, the Housing Corporation classified a number of offsite manufacturing initiatives. Its classification included volumetric construction (e.g. bathroom and kitchen pods), panellised construction systems, hybrid construction (volumetric units integrated with panellised systems), sub-assemblies and components (e.g. floor and roof cassettes, wiring looms, pre-fabricated plumbing), and site-based MMC approaches. [ 3 ]
In 2017, the IPA's Transforming Infrastructure Performance committed the government to "smart construction, using modern methods, including offsite manufacture". It said: "Smart construction (or 'modern methods of construction') offers the opportunity to transition from traditional construction to manufacturing, and unlock the benefits from standard, repeatable processes with components manufactured offsite." [ 4 ]
Recognising that terms such as MMC, prefabrication and off-site construction were prone to different interpretations, a Modern Methods of Construction working group was established by the UK's Ministry of Housing, Communities and Local Government (MHCLG) to develop a definition framework. With inputs from Build Offsite , Homes England , National House Building Council (NHBC) and Royal Institution of Chartered Surveyors (RICS), the Modern Methods of Construction (MMC) framework definition was published in 2019. [ 38 ] It was intended to regularise and refine the term MMC by defining the broad spectrum of innovative construction techniques being applied, enabling clients, advisors, lenders and investors, warranty providers, building insurers and valuers to all build a common understanding of the different forms of MMC use. [ 38 ] It divides factory-produced systems into seven categories.
Another approach to defining and measuring use of MMC was proposed by Mark Farmer in the 2016 Farmer Review . The term Pre-Manufactured Value (PMV) [ 40 ] identifies the percentage of a construction project's value which is manufactured before installation at the final workface. This measure has been adopted by parts of the UK Government in an attempt to influence use of MMC on public sector funded projects.
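As a simple illustration of how PMV works (a sketch; the figures below are invented, not drawn from any actual project), the measure is just the pre-manufactured share of total project value:

```python
# Illustrative Pre-Manufactured Value (PMV) calculation: the share of a
# project's value created before installation at the final workface.
# All figures are hypothetical.
offsite_value = 6_000_000   # value of pre-manufactured components etc. (GBP)
total_value = 10_000_000    # total construction cost (GBP)

pmv = 100 * offsite_value / total_value
print(f"PMV = {pmv:.0f}%")  # -> PMV = 60%
```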
As previously mentioned, while MMC suggests a modern approach, some of its processes (notably prefabrication, but also standardisation of components) were extensively deployed during the 20th century. MMC, particularly in the UK, has been challenging to implement due to the volatility of the UK housing market, [ 41 ] while the increasingly globalised nature of the supply chain for products such as panelised cladding systems also creates issues, for example, concerns about working conditions in remote off-site factories, and the de-skilling impacts on traditional capabilities in local communities. [ 41 ] Moreover, re-classifying activities as manufacturing rather than construction would also materially impact the headline labour productivity of the construction sector. [ 41 ] In addition: "Off-site factories are essentially transient entities. There is hence no guarantee they will be able to supply replacement components in the future. And the more that manufactured components rely on 'high-precision engineering' the less malleable they are in terms of future adaptation." [ 41 ]
After high-profile business failures in the sector (including Ilke Homes), a 2024 study proposed steps to improve public perceptions of MMC and increase industry adoption. Confidence had also been adversely affected by the disjointed nature of MMC organisations, poor communication defining MMC, and unfair value comparisons. The study made a series of recommendations, including standardisation of systems, development of an MMC glossary and rethinking planning policies. [ 42 ] | https://en.wikipedia.org/wiki/Modern_methods_of_construction |
In mathematics, modern triangle geometry , or new triangle geometry , is the body of knowledge relating to the properties of a triangle discovered and developed roughly since the beginning of the last quarter of the nineteenth century. Triangles and their properties have been the subject of investigation since at least the time of Euclid . In fact, Euclid's Elements contains descriptions of the four special points – centroid , incenter , circumcenter and orthocenter – associated with a triangle. Even though Pascal and Ceva in the seventeenth century, Euler in the eighteenth century, Feuerbach in the nineteenth century and many other mathematicians had made important discoveries regarding the properties of the triangle, it was the publication in 1873 of a paper by Émile Lemoine (1840–1912) with the title "On a remarkable point of the triangle" that was considered to have, according to Nathan Altshiller Court, "laid the foundations...of the modern geometry of the triangle as a whole." [ 1 ] [ 2 ] The American Mathematical Monthly , in which much of Lemoine's work was published, declared that "To none of these [geometers] more than Émile-Michel-Hyacinthe Lemoine is due the honor of starting this movement of modern triangle geometry". [ 3 ] The publication of this paper caused a remarkable upsurge of interest in investigating the properties of the triangle during the last quarter of the nineteenth century and the early years of the twentieth century. A hundred-page article on triangle geometry in Klein's Encyclopedia of Mathematical Sciences published in 1914 [ 4 ] bears witness to this upsurge of interest in triangle geometry. [ 5 ]
In the early days, the expression "new triangle geometry" referred only to the set of interesting objects associated with a triangle, like the Lemoine point , the Lemoine circle, the Brocard circle and the Lemoine line . Later the theory of correspondences, an offshoot of the theory of geometric transformations, was developed to give coherence to the various isolated results. With its development, the expression "new triangle geometry" indicated not only the many remarkable objects associated with a triangle but also the methods used to study and classify these objects. Here is a definition of triangle geometry from 1887: "Being given a point M in the plane of the triangle, we can always find, in an infinity of manners, a second point M' that corresponds to the first one according to an imagined geometrical law; these two points have between them geometrical relations whose simplicity depends on the more or less lucky choice of the law which unites them, and each geometrical law gives rise to a method of transformation, a mode of conjugation, which it remains to study." (See the conference paper titled "Teaching new geometrical methods with an ancient figure in the nineteenth and twentieth centuries: the new triangle geometry in textbooks in Europe and USA (1888–1952)" by Pauline Romera-Lebret presented in 2009. [ 6 ] )
However, this escalation of interest soon collapsed, and triangle geometry was completely neglected until the closing years of the twentieth century. In his Development of Mathematics , Eric Temple Bell offers his judgement on the status of modern triangle geometry in 1940 thus: "The geometers of the 20th Century have long since piously removed all these treasures to the museum of geometry where the dust of history quickly dimmed their luster." (The Development of Mathematics, p. 323) [ 5 ] Philip Davis has suggested several reasons for the decline of interest in triangle geometry. [ 5 ]
A further revival of interest came with the advent of the modern electronic computer , and triangle geometry has again become an active area of research pursued by a group of dedicated geometers. Epitomizing this revival are the formulation of the concept of a " triangle centre ", Clark Kimberling's compilation of an encyclopedia of triangle centers listing nearly 50,000 triangle centers and their properties, and the compilation of a catalogue of triangle cubics with detailed descriptions of several properties of more than 1200 triangle cubics. [ 7 ] [ 8 ] The open access journal Forum Geometricorum , founded by Paul Yiu of Florida Atlantic University in 2001, also provided a tremendous impetus in furthering this new-found enthusiasm for triangle geometry. Since 2019, however, the journal has not been accepting submissions, although back issues are still available online.
For a given triangle ABC with centroid G, the symmedian through the vertex A is the reflection of the line AG in the bisector of the angle A. There are three symmedians for a triangle, one passing through each vertex. The three symmedians are concurrent, and the point of concurrency, commonly denoted by K, is called the Lemoine point , the symmedian point or the Grebe point of triangle ABC. If the sidelengths of triangle ABC are a , b , c , the barycentric coordinates of the Lemoine point are a² : b² : c². It has been described as "one of the crown jewels of modern geometry". [ 9 ] There are several earlier references to this point in the mathematical literature, details of which are available in John Mackay's history of the symmedian point. [ 10 ]
In fact, the concurrency of the symmedians is a special case of a more general result: For any point P in the plane of triangle ABC, the isogonals of the lines AP, BP, CP are concurrent, the isogonal of AP (respectively BP, CP) being the reflection of the line AP in the bisector of the angle A (respectively B, C). The point of concurrency is called the isogonal conjugate of P. In this terminology, the Lemoine point is the isogonal conjugate of the centroid.
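To make the coordinates concrete, here is a minimal Python sketch (not from the source; the sample side lengths are arbitrary) using the standard fact that the isogonal conjugate of a point with barycentric coordinates ( x : y : z ) is ( a²/x : b²/y : c²/z ); applying this to the centroid ( 1 : 1 : 1 ) recovers the Lemoine point a² : b² : c².

```python
def isogonal_conjugate(p, sides):
    """Isogonal conjugate of p = (x : y : z) in barycentric coordinates,
    namely (a**2/x : b**2/y : c**2/z), up to an overall scale factor."""
    (x, y, z), (a, b, c) = p, sides
    return (a**2 / x, b**2 / y, c**2 / z)

sides = (5.0, 6.0, 7.0)                          # side lengths a, b, c
K = isogonal_conjugate((1.0, 1.0, 1.0), sides)   # conjugate of the centroid
print(K)                                         # (25.0, 36.0, 49.0), i.e. a^2 : b^2 : c^2
```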
The points of intersections of the lines through the Lemoine point of a triangle ABC parallel to the sides of the triangle lie on a circle called the first Lemoine circle of triangle ABC. The center of the first Lemoine circle lies midway between the circumcenter and the Lemoine point of the triangle.
The points of intersections of the antiparallels to the sides of triangle ABC through the Lemoine point of a triangle ABC lie on a circle called the second Lemoine circle or the cosine circle of triangle ABC. The name "cosine circle" is due to the property of the second Lemoine circle that the lengths of the segments intercepted by the circle on the sides of the triangle are proportional to the cosines of the angles opposite to those sides. The center of the second Lemoine circle is the Lemoine point.
Any triangle ABC and its tangential triangle are in perspective and the axis of perspectivity is called the Lemoine axis of triangle ABC. It is the trilinear polar of the symmedian point of triangle ABC and also the polar of K with regard to the circumcircle of triangle ABC. [ 11 ] [ 12 ]
A quick glance into the world of modern triangle geometry as it existed during the peak of interest in triangle geometry subsequent to the publication of Lemoine's paper is presented below. This presentation is largely based on the topics discussed in William Gallatly's book [ 13 ] published in 1910 and Roger A. Johnson's book [ 14 ] first published in 1929.
Two triangles are said to be poristic triangles if they have the same incircle and circumcircle. Given a circle with center O and radius R and another circle with center I and radius r , there are an infinite number of triangles ABC with circle O( R ) as circumcircle and I( r ) as incircle if and only if OI² = R² − 2 Rr . These triangles form a poristic system of triangles. As the reference triangle traces the different triangles poristic with it, the loci of certain special points, such as the centroid, often turn out to be circles or fixed points. [ 15 ]
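The relation OI² = R² − 2 Rr (Euler's formula) can be checked numerically; the following sketch (not from the source; the sample triangle is arbitrary) computes both sides directly from a triangle's vertices.

```python
import numpy as np

# An arbitrary non-degenerate triangle
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)

s = (a + b + c) / 2                                            # semiperimeter
area = abs((B - A)[0] * (C - A)[1] - (B - A)[1] * (C - A)[0]) / 2
R, r = a * b * c / (4 * area), area / s                        # circumradius, inradius

I = (a * A + b * B + c * C) / (a + b + c)                      # incenter (a : b : c)
w = np.array([a**2 * (b**2 + c**2 - a**2),                     # circumcenter barycentrics
              b**2 * (c**2 + a**2 - b**2),
              c**2 * (a**2 + b**2 - c**2)])
O = (w[0] * A + w[1] * B + w[2] * C) / w.sum()

print(np.isclose(np.dot(O - I, O - I), R**2 - 2 * R * r))      # True
```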
For any point P on the circumcircle of triangle ABC, the feet of perpendiculars from P to the sides of triangle ABC are collinear and the line of collinearity is the well-known Simson line of P. [ 16 ]
Given a point P, let the feet of perpendiculars from P to the sides of the triangle ABC be D, E, F. The triangle DEF is called the pedal triangle of P. [ 17 ] The antipedal triangle of P is the triangle formed by the lines through A, B, C perpendicular to PA, PB, PC respectively. Two points P and Q are called counter points if the pedal triangle of P is homothetic to the antipedal triangle of Q and the pedal triangle of Q is homothetic to the antipedal triangle of P. [ 18 ] [ 19 ]
Given any line l , let P, Q, R be the feet of perpendiculars from the vertices A, B, C of triangle ABC to l . The lines through P, Q, R perpendicular respectively to the sides BC, CA, AB are concurrent, and the point of concurrence is the orthopole of the line l with respect to the triangle ABC. In modern triangle geometry, there is a large body of literature dealing with properties of orthopoles. [ 20 ] [ 21 ]
Let two triads of circles be described on the sides BC, CA, AB of triangle ABC, the circles of one triad containing in their external segments the angles C, A, B respectively, and those of the other the angles B, C, A. Each triad of circles intersects at a common point, thus yielding two such points. These points are called the Brocard points of triangle ABC and are usually denoted by Ω and Ω′. If P is the first Brocard point (the Brocard point determined by the first triad of circles), then the angles PBC, PCA and PAB are equal to each other, and the common angle is called the Brocard angle of triangle ABC, commonly denoted by ω. The Brocard angle is given by cot ω = cot A + cot B + cot C.
The Brocard points and the Brocard angles have several interesting properties. [ 22 ] [ 23 ]
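A short numerical check (a sketch, not from the source; the side lengths are arbitrary) of this formula, together with the equivalent form cot ω = (a² + b² + c²)/(4K) for a triangle of area K:

```python
import math

a, b, c = 5.0, 6.0, 7.0
# Angles from the law of cosines
A = math.acos((b**2 + c**2 - a**2) / (2 * b * c))
B = math.acos((c**2 + a**2 - b**2) / (2 * c * a))
C = math.pi - A - B

cot = lambda t: 1.0 / math.tan(t)
omega = math.atan(1.0 / (cot(A) + cot(B) + cot(C)))   # Brocard angle

s = (a + b + c) / 2
K = math.sqrt(s * (s - a) * (s - b) * (s - c))        # area by Heron's formula
print(math.degrees(omega))                            # ~28.1 degrees (always <= 30)
print(math.isclose(cot(omega), (a**2 + b**2 + c**2) / (4 * K)))  # True
```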
One of the most significant ideas that emerged during the revival of interest in triangle geometry in the closing years of the twentieth century is the notion of a triangle center . This concept, introduced by Clark Kimberling in 1994, unified in one notion the very many special and remarkable points associated with a triangle. [ 24 ] Since the introduction of this idea, hardly any discussion of a result associated with a triangle is complete without a discussion of how the result connects with the triangle centers.
A real-valued function f of three real variables a , b , c may have the following properties: homogeneity, f ( ta , tb , tc ) = t^n f ( a , b , c ) for some constant n and all t > 0; and bisymmetry in the second and third variables, f ( a , b , c ) = f ( a , c , b ).
If a non-zero f has both these properties it is called a triangle center function . If f is a triangle center function and a , b , c are the side-lengths of a reference triangle then the point whose trilinear coordinates are f ( a , b , c ) : f ( b , c , a ) : f ( c , a , b ) is called a triangle center .
Clark Kimberling maintains a website devoted to a compendium of triangle centers. The website, named the Encyclopedia of Triangle Centers , has definitions and descriptions of nearly 50,000 triangle centers.
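This definition translates directly into code; the sketch below (illustrative, with standard example functions not taken from this article) generates trilinear coordinates f ( a , b , c ) : f ( b , c , a ) : f ( c , a , b ) from a center function.

```python
# Kimberling-style triangle centers from a center function f: the trilinears
# are f(a,b,c) : f(b,c,a) : f(c,a,b). Each f below is homogeneous and
# symmetric in its last two arguments, as the definition requires.
def trilinears(f, a, b, c):
    return (f(a, b, c), f(b, c, a), f(c, a, b))

incenter = lambda a, b, c: 1.0                                  # 1 : 1 : 1
symmedian = lambda a, b, c: a                                   # a : b : c
circumcenter = lambda a, b, c: (b**2 + c**2 - a**2) / (2*b*c)   # cos A : cos B : cos C

a, b, c = 5.0, 6.0, 7.0
for name, f in [("incenter", incenter), ("symmedian point", symmedian),
                ("circumcenter", circumcenter)]:
    print(name, trilinears(f, a, b, c))
```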
Another unifying notion of contemporary modern triangle geometry is that of a central line . This concept unifies the several special straight lines associated with a triangle. The notion of a central line is also related to the notion of a triangle center.
Let ABC be a plane triangle and let ( x : y : z ) be the trilinear coordinates of an arbitrary point in the plane of triangle ABC .
A straight line in the plane of triangle ABC whose equation in trilinear coordinates has the form f ( a , b , c ) x + g ( a , b , c ) y + h ( a , b , c ) z = 0,
where the point with trilinear coordinates ( f ( a , b , c ) : g ( a , b , c ) : h ( a , b , c ) ) is a triangle center,
is a central line in the plane of triangle ABC relative to the triangle ABC . [ 25 ] [ 26 ]
Let X be any triangle center of the triangle ABC . If X has trilinear coordinates ( u : v : w ), then the line u x + v y + w z = 0 is a central line; it is the trilinear polar of the isogonal conjugate of X .
A triangle conic is a conic in the plane of the reference triangle that is associated with it in some way. For example, the circumcircle and the incircle of the reference triangle are triangle conics. Other examples are the Steiner ellipse , an ellipse passing through the vertices and having its centre at the centroid of the reference triangle; the Kiepert hyperbola , a conic passing through the vertices, the centroid and the orthocentre of the reference triangle; and the Artzt parabolas , parabolas touching two sidelines of the reference triangle at vertices of the triangle. Some recently studied triangle conics include Hofstadter ellipses and Yff conics . However, there is no formal definition of the term "triangle conic" in the literature; that is, the relations a conic should have with the reference triangle in order to qualify as a triangle conic have not been precisely formulated. Wolfram MathWorld has a page titled "Triangle conics" which gives a list of 42 items (not all of them conics) without giving a definition of triangle conic. [ 27 ]
Cubic curves arise naturally in the study of triangles. For example, the locus of a point P in the plane of the reference triangle ABC such that, if the reflections of P in the sidelines of triangle ABC are P a , P b , P c , then the lines AP a , BP b and CP c are concurrent, is a cubic curve named the Neuberg cubic . It is the first cubic listed in Bernard Gibert's Catalogue of Triangle Cubics . This Catalogue lists more than 1200 triangle cubics with information on each curve, such as the barycentric equation of the curve, triangle centers which lie on the curve, locus properties of the curve and references to literature on the curve.
The advent of computers had a deciding influence on the course of the renewed interest in triangle geometry witnessed during the closing years of the twentieth century and the early years of the current century. Some of the ways in which computers have influenced this course have been delineated by Philip Davis. [ 5 ] Computers have been used to generate new results in triangle geometry. [ 28 ] A survey article published in 2015 gives an account of some of the important new results discovered by the computer programme "Discoverer". [ 29 ]
Sava Grozdev, Hiroshi Okumura and Deko Dekov maintain a web portal dedicated to a computer-discovered encyclopedia of Euclidean geometry . [ 30 ] | https://en.wikipedia.org/wiki/Modern_triangle_geometry
Modern valence bond theory is the application of valence bond theory (VBT) with computer programs that are competitive in accuracy and economy with programs for the Hartree–Fock or post-Hartree–Fock methods. The latter methods dominated quantum chemistry from the advent of digital computers because they were easier to program. The early popularity of valence bond methods thus declined. It is only recently that the programming of valence bond methods has improved. These developments are due to and described by Gerratt, Cooper, Karadakov and Raimondi (1997); [ 1 ] Li and McWeeny (2002); Joop H. van Lenthe and co-workers (2002); [ 2 ] Song, Mo, Zhang and Wu (2005); and Shaik and Hiberty (2004). [ 3 ]
While molecular orbital theory (MOT) describes the electronic wavefunction as a linear combination of basis functions that are centered on the various atoms in a species ( linear combination of atomic orbitals ), VBT describes the electronic wavefunction as a linear combination of several valence bond structures. [ 4 ] Each of these valence bond structures can be described using linear combinations of either atomic orbitals, delocalized atomic orbitals ( Coulson-Fischer theory ), or even molecular orbital fragments. [ 5 ] Although this is often overlooked, MOT and VBT are equally valid ways of describing the electronic wavefunction, and are actually related by a unitary transformation . Assuming MOT and VBT are applied at the same level of theory, this relationship ensures that they will describe the same wavefunction, but will do so in different forms. [ 4 ]
Heitler and London's original work on VBT attempts to approximate the electronic wavefunction as a covalent combination of localized basis functions on the bonding atoms. [ 6 ] In VBT, wavefunctions are described as the sums and differences of VB determinants, which enforce the antisymmetric properties required by the Pauli exclusion principle . Taking H 2 as an example, the VB determinant is |a b̄| = N [a(1)b(2)α(1)β(2) − a(2)b(1)α(2)β(1)]
In this expression, N is a normalization constant, and a and b are basis functions that are localized on the two hydrogen atoms, often considered simply to be 1s atomic orbitals. The numbers are an index to describe the electron (i.e. a(1) represents the concept of ‘electron 1’ residing in orbital a). α and β describe the spin of the electron. The bar over b in |a b̄| indicates that the electron associated with orbital b has β spin (in the first term, electron 2 is in orbital b, and thus electron 2 has β spin). By itself, a single VB determinant is not a proper spin-eigenfunction, and thus cannot describe the true wavefunction. [ 5 ] However, by taking the sum and difference (linear combinations) of VB determinants, two approximate wavefunctions can be obtained: Φ HL = |a b̄| − |ā b| and Φ T = |a b̄| + |ā b|
Φ HL is the wavefunction as originally described by Heitler and London, and describes the covalent bonding between orbitals a and b in which the spins are paired, as expected for a chemical bond. Φ T is a representation of the bond in which the electron spins are parallel, resulting in a triplet state. This is a highly repulsive interaction, so this description of the bonding will not play a major role in determining the wavefunction.
Other ways of describing the wavefunction can also be constructed. Specifically, instead of considering a covalent interaction, the ionic interactions can be considered, resulting in the wavefunction Φ I = |a ā| + |b b̄|
This wavefunction describes the bonding in H 2 as the ionic interaction between an H + and an H − .
Since neither of these wavefunctions, Φ HL (covalent bonding) nor Φ I (ionic bonding), perfectly approximates the true wavefunction, a combination of the two can be used to describe the total wavefunction: Φ VBT = λ Φ HL + μ Φ I
where λ and μ are coefficients that can vary from 0 to 1. In determining the lowest energy wavefunction, these coefficients can be varied until a minimum energy is reached. λ will be larger in bonds that have more covalency, while μ will be larger in bonds that are more ionic. In the specific case of H 2 , λ ≈ 0.75, and μ ≈ 0.25.
The orbitals that were used as the basis (a and b) do not necessarily have to be localized on the atoms involved in bonding. Orbitals that are partially delocalized onto the other atom involved in bonding can also be used, as in the Coulson-Fischer theory . Even the molecular orbitals associated with a portion of a molecule can be used as a basis set, a process referred to as using fragment orbitals. [ 5 ]
For more complicated molecules, Φ VBT could consider several possible structures that all contribute in various degrees (there would be several coefficients, not just λ and μ). An example of this is the Kekule and Dewar structures used in describing benzene .
Note that all normalization constants were ignored in the discussion above for simplicity.
The application of VBT and MOT to computations that attempt to approximate the Schrödinger equation began near the middle of the 20th century, but MOT quickly became the preferred approach of the two. The relative computational ease of doing calculations with non-overlapping orbitals in MOT is said to have contributed to its popularity. [ 1 ] In addition, the successful explanation of π-systems, pericyclic reactions, and extended solids further cemented MOT as the preeminent approach. [ 4 ] Despite this, the two theories are just two different ways of representing the same wavefunction. As shown below, at the same level of theory, the two methods lead to the same results.
The relationship between MOT and VBT can be made clearer by directly comparing the results of the two theories for the hydrogen molecule, H 2 . Using MOT, the same basis orbitals (a and b) can be used to describe the bonding. Combining them in a constructive and destructive manner gives two molecular orbitals: σ = a + b and σ* = a − b
The ground state wavefunction of H 2 would be that where the σ orbital is doubly occupied, which is expressed as the following Slater determinant (as required by MOT): Ψ MO = |σ σ̄|
This expression for the wavefunction can be shown to be equivalent to the following wavefunction, Ψ MO = |a b̄| − |ā b| + |a ā| + |b b̄|,
which is now expressed in terms of VB determinants. This transformation does not alter the wavefunction in any way, only the way that the wavefunction is represented. This process of going from an MO description to a VB description can be referred to as ‘mapping MO wavefunctions onto VB wavefunctions’, and is fundamentally the same process as that used to generate localized molecular orbitals . [ 5 ]
Rewriting the VB wavefunction derived above, we can clearly see the relationship between MOT and VBT: Ψ MO = Φ HL + Φ I
Thus, at its simplest level, MOT is just VBT, where the covalent and ionic contributions (the first and second terms, respectively) are equal. This is the basis of the claim that MOT does not correctly predict the dissociation of molecules. When MOT includes configuration interaction (MO-CI), this allows the relative contributions of the covalent and ionic contributions to be altered. This leads to the same description of bonding for both VBT and MO-CI. In conclusion, the two theories, when brought to a high enough level of theory, will converge. Their distinction is in the way they are built up to that description.
Note that in all of the aforementioned discussions, as with the derivation of H 2 for VBT, normalization constants were ignored for simplicity.
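The expansion above is easy to verify symbolically; the following sketch (illustrative, with normalization and the common spin factor omitted, as in the text) expands the spatial part of the MO determinant and confirms that it equals the covalent plus ionic VB terms with equal weights.

```python
# Verify that the spatial part of |sigma sigma-bar| for H2, with
# sigma = a + b, expands into equal covalent and ionic VB contributions.
# Normalization constants and the common spin factor are omitted.
import sympy as sp

a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')   # a(i), b(i) for electrons i = 1, 2

mo_spatial = sp.expand((a1 + b1) * (a2 + b2))   # both electrons in sigma = a + b
covalent = a1*b2 + b1*a2                        # Heitler-London (Phi_HL) term
ionic = a1*a2 + b1*b2                           # H+ H- and H- H+ (Phi_I) terms

assert sp.simplify(mo_spatial - (covalent + ionic)) == 0
print("MO wavefunction = covalent + ionic, with equal weights (lambda = mu)")
```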
When describing the relationship between MOT and VBT, a few examples are commonly perceived as failures of VBT. However, these often arise from an incomplete or inaccurate use of VBT. [ 7 ] [ 5 ]
It is known that O 2 has a triplet ground state , but a classic Lewis structure depiction of oxygen would not indicate that any unpaired electrons exist. Perhaps because Lewis structures and VBT often depict the same structure as the most stable state, this misinterpretation has persisted. [ 7 ] However, as has been consistently demonstrated with VBT calculations, the lowest energy state is that with two three-electron π-bonds, which is the triplet state. [ 8 ]
The photoelectron spectrum (PES) of methane is commonly used as an argument for why MO theory is superior to VBT. [ 5 ] [ 7 ] From an MO calculation (or even just a qualitative MOT diagram), it can be seen that the HOMO is a triply degenerate state, while the HOMO-1 is a singly degenerate state. By invoking Koopmans' theorem , one can predict that there would be two distinct peaks in the ionization spectrum of methane, corresponding to ionization from the t 2 orbitals or the a 1 orbital, with a 3:1 ratio in intensity. This is corroborated by experiment. However, when one examines the VB description of CH 4 , it is clear that there are 4 equivalent bonds between C and H. If one were to invoke Koopmans' theorem (which is implicitly done when claiming that VBT is inadequate to describe PES), a single ionization energy peak would be predicted. However, Koopmans' theorem cannot be applied to orbitals that are not the canonical molecular orbitals, and thus a different approach is required to understand the ionization potentials of methane from VBT. To do this, the ionized product, CH 4 + , must be analyzed. The VB wavefunction of CH 4 + would be an equal combination of 4 structures, each having 3 two-electron bonds and 1 one-electron bond. Based on group theory arguments, these states must give rise to a triply degenerate T 2 state and a singly degenerate A 1 state. There thus exist two distinct transitions from the CH 4 state with 4 equivalent bonds to the two CH 4 + states. [ 5 ]
Listed below are a few notable VBT methods that are applied in modern computational software packages. [ 5 ]
This was one of the first ab initio computational methods developed that utilized VBT. Using Coulson-Fischer type basis orbitals, this method uses singly-occupied, instead of doubly-occupied, orbitals as the basis set. This allows the distance between paired electrons to increase during variational optimization, lowering the resultant energy. [ 8 ] The total wavefunction is described by a single set of orbitals, rather than a linear combination of multiple VB structures. GVB is considered to be a user-friendly method for new practitioners. [ 5 ]
SCGVB (or sometimes SCVB/full GVB) [ 9 ] is an extension of GVB that still uses delocalized orbitals, whose delocalization can adjust with molecular structure. In addition, the electronic wavefunction is still a single product of orbitals. The difference is that the spin functions are allowed to adjust simultaneously with the orbitals during energy minimization procedures. This is considered to be one of the best VB descriptions of the wavefunction that relies on only a single configuration. [ 5 ]
This is a method that often gets confused with a traditional VB method. [ 5 ] Instead, it is a localization procedure that maps a complete active space self-consistent field ( CASSCF ) wavefunction, i.e. a full configuration interaction within the active space, onto valence bond structures. [ 10 ]
There are a large number of different valence bond methods. Most use n valence bond orbitals for n electrons. If a single set of these orbitals is combined with all linear independent combinations of the spin functions , we have spin-coupled valence bond theory . The total wave function is optimized using the variational method by varying the coefficients of the basis functions in the valence bond orbitals and the coefficients of the different spin functions. In other cases only a sub-set of all possible spin functions is used. Many valence bond methods use several sets of the valence bond orbitals. It is important to note here that different authors use different names for these different valence bond methods.
Several research groups have produced computer programs for modern valence bond calculations that are freely available. | https://en.wikipedia.org/wiki/Modern_valence_bond_theory |
Moderne Algebra is a two-volume German textbook on graduate abstract algebra by Bartel Leendert van der Waerden ( 1930 , 1931 ), originally based on lectures given by Emil Artin in 1926 and by Emmy Noether ( 1929 ) from 1924 to 1928. The English translation of 1949–1950 had the title Modern algebra , though a later, extensively revised edition in 1970 had the title Algebra .
The book was one of the first textbooks to use an abstract axiomatic approach to groups , rings , and fields , and was by far the most successful, becoming the standard reference for graduate algebra for several decades. It "had a tremendous impact, and is widely considered to be the major text on algebra in the twentieth century." [ 1 ]
In 1975 van der Waerden described the sources he drew upon to write the book. [ 2 ]
In 1997 Saunders Mac Lane recollected the book's influence: [ 3 ]
Moderne Algebra has a rather confusing publication history, because it went through many different editions, several of which were extensively rewritten with chapters and major topics added, deleted, or rearranged. In addition, the new editions of the first and second volumes were issued almost independently and at different times, and the numbering of the English editions does not correspond to the numbering of the German editions. In 1955 the title was changed from "Moderne Algebra" to "Algebra" following a suggestion of Brandt, with the result that the two volumes of the third German edition do not even have the same title.
For volume 1, the first German edition was published in 1930, the second in 1937 (with the axiom of choice removed), the third in 1951 (with the axiom of choice reinstated, and with more on valuations ). [ 4 ] The fourth edition appeared in 1955 (with the title changed to Algebra ), the fifth in 1960, the sixth in 1964, the seventh in 1966, the eighth in 1971, the ninth in 1993. For volume 2, the first edition was published in 1931, the second in 1940, the third in 1955 (with the title changed to Algebra ), the fourth in 1959 (extensively rewritten, with elimination theory replaced by algebraic functions of 1 variable), [ 5 ] the fifth in 1967, and the sixth in 1993. The German editions were all published by Springer.
The first English edition was published in 1949–1950 and was a translation of the second German edition. There was a second edition in 1953, and a third edition under the new title Algebra in 1970 translated from the 7th German edition of volume 1 and the 5th German edition of volume 2. The three English editions were originally published by Ungar, though the 3rd English edition was later reprinted by Springer.
There were also Russian editions published in 1976 and 1979, and Japanese editions published in 1959 and 1967–1971. | https://en.wikipedia.org/wiki/Moderne_Algebra |
In mathematics , there are many senses in which a sequence or a series is said to be convergent. This article describes various modes (senses or species) of convergence in the settings where they are defined. For a list of modes of convergence, see Modes of convergence (annotated index) .
Each of the following objects is a special case of the types preceding it: sets , topological spaces , uniform spaces , topological abelian groups , normed spaces , Euclidean spaces , and the real/complex numbers. Also, any metric space is a uniform space.
Convergence can be defined in terms of sequences in first-countable spaces . Nets are a generalization of sequences that are useful in spaces which are not first countable. Filters further generalize the concept of convergence.
In metric spaces, one can define Cauchy sequences . Cauchy nets and filters are generalizations to uniform spaces . Even more generally, Cauchy spaces are spaces in which Cauchy filters may be defined. Convergence implies "Cauchy convergence", and Cauchy convergence, together with the existence of a convergent subsequence, implies convergence. The concept of completeness of metric spaces, and its generalizations, is defined in terms of Cauchy sequences.
In a topological abelian group , convergence of a series is defined as convergence of the sequence of partial sums. An important concept when considering series is unconditional convergence , which guarantees that the limit of the series is invariant under permutations of the summands.
In a normed vector space , one can define absolute convergence as convergence of the series Σ| b k |. Absolute convergence implies Cauchy convergence of the sequence of partial sums (by the triangle inequality ), which in turn implies absolute convergence of some grouping (not reordering). The sequence of partial sums obtained by grouping is a subsequence of the partial sums of the original series. The convergence of every absolutely convergent series is an equivalent condition for a normed vector space to be Banach (i.e. complete).
Absolute convergence and convergence together imply unconditional convergence, but unconditional convergence does not imply absolute convergence in general, even if the space is Banach, although the implication holds in R^d.
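The classic one-dimensional example is the alternating harmonic series, which converges but not absolutely; a quick numeric illustration (a sketch, not from the source):

```python
import math

# The alternating harmonic series converges (to ln 2), but the series of
# absolute values is the harmonic series, whose partial sums grow like ln n.
n = 200_000
print(sum((-1) ** (k + 1) / k for k in range(1, n + 1)), math.log(2))
print(sum(1.0 / k for k in range(1, n + 1)))   # unbounded as n grows
```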
The most basic type of convergence for a sequence of functions (in particular, it does not assume any topological structure on the domain of the functions) is pointwise convergence . It is defined as convergence of the sequence of values of the functions at every point. If the functions take their values in a uniform space, then one can define pointwise Cauchy convergence, uniform convergence , and uniform Cauchy convergence of the sequence.
Pointwise convergence implies pointwise Cauchy convergence, and the converse holds if the space in which the functions take their values is complete. Uniform convergence implies pointwise convergence and uniform Cauchy convergence. Uniform Cauchy convergence and pointwise convergence of a subsequence imply uniform convergence of the sequence, and if the codomain is complete, then uniform Cauchy convergence implies uniform convergence.
If the domain of the functions is a topological space and the codomain is a uniform space, local uniform convergence (i.e. uniform convergence on a neighborhood of each point) and compact (uniform) convergence (i.e. uniform convergence on all compact subsets ) may be defined. "Compact convergence" is always short for "compact uniform convergence," since "compact pointwise convergence" would mean the same thing as "pointwise convergence" (points are always compact).
Uniform convergence implies both local uniform convergence and compact convergence, since both are local notions while uniform convergence is global. If X is locally compact (even in the weakest sense: every point has compact neighborhood), then local uniform convergence is equivalent to compact (uniform) convergence. Roughly speaking, this is because "local" and "compact" connote the same thing.
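A standard illustration is f n ( x ) = x^n on [0, 1): the sequence converges pointwise to 0, uniformly on every compact subset [0, c] with c < 1, but not uniformly on the whole interval. The numeric sketch below (not from the source) evaluates the supremum on a compact subset and the value at a point close to 1.

```python
import numpy as np

# f_n(x) = x**n on [0, 1): pointwise limit is 0. On the compact subset
# [0, 0.9] the supremum 0.9**n tends to 0 (uniform convergence there),
# but on [0, 1) the supremum is 1 for every n, since f_n(x) -> 1 as x -> 1.
for n in (10, 100, 1000):
    x_compact = np.linspace(0.0, 0.9, 1001)
    print(n, (x_compact ** n).max(), (1.0 - 1e-6) ** n)
# Third column: the value at x = 1 - 1e-6 stays close to 1, so the
# supremum over [0, 1) does not tend to 0.
```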
Pointwise and uniform convergence of series of functions are defined in terms of convergence of the sequence of partial sums.
For functions taking values in a normed linear space , absolute convergence refers to convergence of the series of positive, real-valued functions Σ| g k |. "Pointwise absolute convergence" is then simply pointwise convergence of Σ| g k |.
Normal convergence is convergence of the series of non-negative real numbers obtained by taking the uniform (i.e. "sup") norm of each function in the series (uniform convergence of Σ| g k |). In Banach spaces , pointwise absolute convergence implies pointwise convergence, and normal convergence implies uniform convergence.
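This is the viewpoint of the Weierstrass M-test; a small numeric sketch (illustrative, not from the source) for the terms g k ( x ) = sin( kx )/k² on [0, 2π]:

```python
import numpy as np

# Each term g_k(x) = sin(k*x)/k**2 has sup norm about 1/k**2, and
# sum 1/k**2 converges (to pi**2/6); so the series converges normally,
# hence uniformly.
x = np.linspace(0.0, 2 * np.pi, 1001)
sup_norms = [np.abs(np.sin(k * x) / k ** 2).max() for k in range(1, 5001)]
print(sum(sup_norms), np.pi ** 2 / 6)
```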
For functions defined on a topological space, one can define (as above) local uniform convergence and compact (uniform) convergence in terms of the partial sums of the series. If, in addition, the functions take values in a normed linear space, then local normal convergence (local, uniform, absolute convergence) and compact normal convergence (absolute convergence on compact sets ) can be defined.
Normal convergence implies both local normal convergence and compact normal convergence. And if the domain is locally compact (even in the weakest sense), then local normal convergence implies compact normal convergence.
If one considers sequences of measurable functions , then several modes of convergence that depend on measure-theoretic, rather than solely topological properties, arise. This includes pointwise convergence almost-everywhere, convergence in p -mean and convergence in measure . These are of particular interest in probability theory . | https://en.wikipedia.org/wiki/Modes_of_convergence |
A mode of toxic action is a common set of physiological and behavioral signs that characterize a type of adverse biological response. [ 1 ] A mode of action should not be confused with mechanism of action , which refers to the biochemical processes underlying a given mode of action. [ 2 ] Modes of toxic action are important, widely used tools in ecotoxicology and aquatic toxicology because they classify toxicants or pollutants according to their type of toxic action. There are two major types of modes of toxic action: non-specific acting toxicants and specific acting toxicants. Non-specific acting toxicants are those that produce narcosis , while specific acting toxicants are those that are non-narcotic and that produce a specific action at a specific target site.
Non-specific acting modes of toxic action result in narcosis ; therefore, narcosis is a mode of toxic action. Narcosis is defined as a generalized depression in biological activity due to the presence of toxicant molecules in the organism. [ 1 ] The target site and mechanism of toxic action through which narcosis affects organisms are still unclear, but there are hypotheses that it occurs through alterations of the cell membranes at specific sites, such as the lipid layers or the proteins bound to the membranes. Although continuous exposure to a narcotic toxicant can produce death , narcosis can be reversible if the exposure is stopped.
Toxicants that at low concentrations modify or inhibit some biological process by binding at a specific site or molecule have a specific acting mode of toxic action. [ 1 ] However, at high enough concentrations, toxicants with specific acting modes of toxic actions can produce narcosis that may or may not be reversible. Nevertheless, the specific action of the toxicant is always shown first because it requires lower concentrations. [ citation needed ]
There are several specific acting modes of toxic action.
The pioneering work of identifying the major categories of modes of toxic action (see description above) was conducted by investigators from the U.S. Environmental Protection Agency (EPA) at the Duluth Laboratory using fish, [ 1 ] [ 3 ] [ 4 ] [ 5 ] which is why they named the categories Fish Acute Toxicity Syndromes (FATS). They proposed the FATS by assessing the behavioral and physiological responses of fish subjected to toxicity tests , such as locomotive activities, body color , ventilation patterns, cough rate , heart rate , and others. [ 2 ]
It has been proposed that modes of toxic action could be estimated by developing a data set of critical body residues (CBR). [ 3 ] The CBR is the whole-body concentration of a chemical that is associated with a given adverse biological response, [ 1 ] and it is estimated using a partition coefficient and a bioconcentration factor. The whole-body residues are reasonable first approximations of the amount of chemical present at the toxic action site(s). [ 3 ] Because different modes of toxic action generally appear to be associated with different ranges of body residues, [ 3 ] modes of toxic action can then be separated into categories. However, it is unlikely that every chemical has the same mode of toxic action in every organism, so this variability should be considered. [ 3 ] The effects of mixture toxicity should be considered as well: even though mixture toxicity is generally additive, [ 3 ] chemicals with more than one mode of toxic action may contribute to toxicity. [ 4 ]
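As a rough sketch of the estimation step (assuming the commonly used first-order relation CBR ≈ BCF × LC50; the numbers below are purely illustrative, not measured data):

```python
# Hypothetical CBR estimate: whole-body residue at the effect level as the
# product of the bioconcentration factor (BCF) and the water-based effect
# concentration (e.g. LC50). Values are invented for illustration.
def critical_body_residue(bcf_l_per_kg: float, lc50_mmol_per_l: float) -> float:
    """Return an estimated CBR in mmol per kg wet weight."""
    return bcf_l_per_kg * lc50_mmol_per_l

print(critical_body_residue(bcf_l_per_kg=500.0, lc50_mmol_per_l=0.004))  # 2.0
```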
Modeling has become a commonly used tool for predicting modes of toxic action in the last decade. The models are based on Quantitative Structure-Activity Relationships (QSARs), which are mathematical models that relate the biological activity of molecules to their chemical structures and corresponding chemical and physicochemical properties. [ 1 ] QSARs can then predict modes of toxic action of unknown compounds by comparing their characteristic toxicity profiles and chemical structures to reference compounds with known toxicity profiles and chemical structures. [ 2 ] Russom and colleagues [ 6 ] were among the first researchers to classify modes of toxic action with the use of QSARs; they classified 600 chemicals as narcotics. Even though QSARs are a useful tool for predicting modes of toxic action, chemicals having multiple modes of toxic action can obscure QSAR analyses. Therefore, these models are continuously being developed. [ citation needed ]
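In its simplest form, a QSAR is a regression from a structural descriptor to a toxicity endpoint; the toy sketch below (all data invented; real QSARs use curated datasets and multiple descriptors) fits log(1/LC50) against the octanol-water partition coefficient log Kow, the classic descriptor for narcosis:

```python
import numpy as np

# Hypothetical narcotic chemicals: descriptor (log Kow) vs response (log 1/LC50)
log_kow = np.array([1.5, 2.0, 3.1, 4.0, 4.8])
log_inv_lc50 = np.array([0.4, 0.9, 1.8, 2.6, 3.3])

slope, intercept = np.polyfit(log_kow, log_inv_lc50, 1)   # one-descriptor QSAR
print(f"log(1/LC50) = {slope:.2f} * logKow + {intercept:.2f}")
print(slope * 3.5 + intercept)   # predicted toxicity of a new compound
```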
The objective of environmental risk assessment is to protect the environment from adverse effects. [ 2 ] Researchers are further developing QSAR models with the ultimate goal of providing clear insight not only into a mode of toxic action, but also into what the actual target site is, the concentration of the chemical at this target site, and the interaction occurring at the target site, [ 2 ] as well as to predict the modes of toxic action in mixtures . Information on the mode of toxic action is crucial not only for understanding joint toxic effects and potential interactions between chemicals in mixtures, but also for developing assays for the evaluation of complex mixtures in the field.
The combination of behavioral and physiological responses, CBR estimates, and chemical fate and bioaccumulation QSAR models can be a powerful regulatory tool [ 3 ] to address pollution and toxicity in areas where effluents are discharged. | https://en.wikipedia.org/wiki/Modes_of_toxic_action |
Mode shapes in physics are specific patterns of vibration that a structure or system can exhibit when it oscillates at its natural frequencies. These patterns describe the relative displacement of different parts of the system during vibration.
In applied mathematics, mode shapes are a manifestation of eigenvectors which describe the relative displacement of two or more elements in a mechanical system [ 1 ] or wave front. [ 2 ] A mode shape is a deflection pattern related to a particular natural frequency and represents the relative displacement of all parts of a structure for that particular mode.
Mode shapes have a mathematical meaning as 'eigenvectors' or 'eigenfunctions' of the eigenvalue problem which arises when studying particular solutions of the partial differential equation of a system.
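For a discrete system this eigenvalue problem takes the form K x = ω² M x, with stiffness matrix K and mass matrix M; the following minimal sketch (with illustrative masses and stiffnesses, not drawn from the source) extracts natural frequencies and mode shapes for a two-degree-of-freedom spring-mass chain.

```python
import numpy as np
from scipy.linalg import eigh

# Two equal masses connected by three springs (fixed-fixed chain)
m, k = 1.0, 100.0                              # kg, N/m (illustrative values)
M = np.diag([m, m])                            # mass matrix
K = np.array([[2 * k, -k],
              [-k, 2 * k]])                    # stiffness matrix

w2, modes = eigh(K, M)                         # solve K x = w^2 M x
print(np.sqrt(w2))                             # natural frequencies [rad/s]: 10, ~17.3
print(modes)                                   # columns: in-phase and out-of-phase mode shapes
```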
| https://en.wikipedia.org/wiki/Modeshape
The Modification and Replacement Parts Association is the Washington, D.C. -based trade association that represents manufacturers of government-approved aftermarket aircraft parts . These aircraft parts are often known as PMA parts, from the acronym for Parts Manufacturer Approval . The manufacture of PMA parts is regulated in the United States by the Federal Aviation Administration . [ 1 ]
MARPA's primary focus is on representing the needs of the PMA parts community in the United States. These companies manufacture aftermarket aircraft parts under strict FAA guidelines. [ 2 ]
In order to obtain a PMA from the FAA, the manufacturer must demonstrate that it has a part design that meets the applicable airworthiness requirements and a production quality system capable of consistently producing parts that conform to that approved design.
The manufacturers that meet these standards are issued Parts Manufacturer Approval (PMA) by the FAA.
Because the United States was the first nation to adopt rules permitting the manufacture of aircraft after market parts (and for many decades was the only nation with these rules), the PMA industry is primarily concentrated in the United States. Other countries have started to investigate and even adopt PMA rules. Some non-US manufacturers have started to investigate the benefits that MARPA can bring to them. In the future, MARPA's manufacturing membership may reflect a more international base.
MARPA's members include many air carriers from around the world.
MARPA has an air carrier committee that remains quite active. [ 4 ] The committee was originally formed by MARPA Director Josh Abelson, and since then has been chaired by Cori Ferguson of Alaska Airlines (2006–2008), David Linebaugh of Delta Air Lines (2008–2011), Steve Jones of American Airlines (2011–2013), William Barrett of American Airlines (2013), Edward Pozzi of United Airlines (2013–2014), Michael Rennick of Delta Air Lines (2014–2017), and is now co-chaired by Deidre Vance of American Airlines (2016–present) and Donald "Donny" Douglas of Delta Air Lines (2017–present). Air carriers engage in a complete engineering review of a PMA part before they choose to install it (despite the fact that the FAA has already approved the part). Despite this engineering review (or perhaps because of it), air carriers have been adopting PMA usage in their fleets and recognizing reliability improvements and cost savings. [ 5 ]
The industry that MARPA represents, PMA parts manufacturers, is not new. PMA parts have been around as commercial competitors since at least the 1950s. [ 6 ] But the popularity of PMA parts has boomed since the turn of the millennium. [ 7 ]
MARPA was formed by George Powell, Jim Reum and Jason Dickstein. During the 1990s, the three men were members of the FAA Aviation Rulemaking Advisory Committee (ARAC) Part 21 Working Group, which was charged with developing updated manufacturing regulations. During their work on the working group, the three men came to understand the value of, and need for, a trade association to represent the interests of the PMA manufacturing community. On May 18, 1998, they held a "MARPA Kick-off Meeting" at Fado Irish Pub on 7th St. in Washington, DC and signed, on a restaurant placemat, an agreement to form such a trade association.
The association spent its early years in the Phoenix, Arizona , area under the guidance of George Powell and his wife Gloria Nations. In 2007, when Jason Dickstein became president, MARPA moved its headquarters to Washington, D.C., in order to be closer to the decision-makers that affect MARPA's members. [ 8 ]
MARPA has had three presidents.
MARPA has had four chairmen of the board.
MARPA keeps its members informed about changes in the regulations that affect them, and also informs them about proposed changes in order to permit them to file the comments with the government. [ 9 ] MARPA works with government agencies, like the FAA and EASA, to help promote aircraft parts safety. MARPA also works with the government agencies to help promote reasonable standards of commercial fairness in the industry. MARPA is a frequent contributor to the rule making process in the United States. [ 10 ] For example, from 2009 through 2011, MARPA's president (Jason Dickstein) served on the FAA Aviation Rulemaking Committee (ARC) for Safety Management Systems (SMS), [ 11 ] and MARPA played a significant role in helping to craft a version of the rule that would most effectively promote safety. [ 12 ]
MARPA also develops programs and tools to assist its members in regulatory compliance and in meeting their safety goals. In 2004, the FAA and PMA community discussed establishing new standards for Continued Operational Safety (COS). [ 13 ] MARPA formed a committee of its members and drafted COS Guidance. [ 14 ] [ 15 ] The MARPA COS guidance represents a voluntary system under which PMA manufacturers track their parts through their entire life cycle in order to be able to collect reliability data. The purpose of the data is to support parts reliability, allowing manufacturers to respond proactively to potential safety issues early enough in the life cycle of the parts that the potential issue may not yet have manifested itself. [ 16 ] The data allows PMA companies to provide the same level of response and support to their customers that their competitors provide. [ 17 ]
MARPA has been recognized as a trade association committed to helping PMA companies to comply with FAA and other regulations. [ 18 ]
MARPA joined with the Air Transport Association in a letter to CFM International, asking them to rescind a series of advertisements that made unsupported claims. [ 19 ] Although CFM did not admit to MARPA's and ATA's claims, it did replace its factually-unsupported advertisements with ads that did not make the same allegations.
Fairness remained an issue as some of the larger manufacturers of aircraft products attempted to inhibit trade in PMAs by inaccurately claiming that PMA parts were not subject to the same Instructions for Continued Airworthiness as the parts that they replace. [ 20 ] MARPA continued to inform the industry and the government about these issues and in 2008, the FAA released a Special Airworthiness Information Bulletin to remind the industry that PMA parts are FAA-approved in the United States, and therefore PMA parts are valid replacement parts that continue to enjoy the same Instructions for Continued Airworthiness as the parts that they replace. [ 21 ] [ 22 ]
MARPA has always had an education focus. MARPA has assisted its members in complying with the regulations and developing strategies to obtain PMAs. [ 23 ] It also educates the PMA community about the changing standards that apply to them through their website, a monthly newsletter, a blog, and an Annual Conference.
In addition to keeping the PMA industry informed about the changes that surround it, MARPA has also focused on educating PMA users (and potential users) about the benefits of using PMA parts. [ 24 ] MARPA's efforts to educate air carriers are credited with a significant decrease in the amount of time that an air carrier needs in order to review and approve a PMA part for use in their fleet. [ 25 ]
MARPA also supports member efforts to educate the public about PMA parts. [ 26 ]
MARPA holds an annual conference to share information about the rules that apply to PMA, changing legal standards that could influence PMA manufacturers, and industry conditions that affect PMA. [ 27 ] There is usually one or more analyses of current aviation industry economic conditions and airline market conditions in order to facilitate industry strategic planning. [ 28 ] The Conference generally features significant participation by the FAA and other government entities (like EASA, U.S. Air Force, U.S. International Trade Administration, U.S. Justice Department, etc.), which makes it an important forum for exchanging information.
MARPA is based in the United States and mostly made up of US-based companies because the United States was the first country to have a body of regulations that supported government-approved manufacturing of aftermarket aircraft parts. Other countries have begun to investigate options to support such industries.
The European Aviation Safety Agency has issued a decision verifying that PMA parts from the United States are accepted and used in European Community member countries. [ 29 ] This decision followed a long-standing policy of the Joint Aviation Authorities (JAA). The tenets of European acceptance also exist in the bilateral agreements between the United States and a number of European nations. [ 30 ]
In 2010, MARPA appeared on a panel at the FAA / EASA International Safety Meeting. [ 31 ] The panel also included Eurocopter and both FAA and EASA senior management. The purpose of the panel was to explore the safety paradigms that could be introduced to address the relationships between the aftermarket and the OEM market. MARPA's presentation focused on the Association's initiatives designed to provide a foundation for data gathering, data analysis, predictive risk management, and cooperative hazard mitigation.
MARPA's air carrier and maintenance facility members include a number of non-US parties. | https://en.wikipedia.org/wiki/Modification_and_Replacement_Parts_Association |
The term modifications in genetics refers to both naturally occurring and engineered changes in DNA.
Incidental, or natural, mutations occur through errors during replication and repair, either spontaneously or due to environmental stressors. Intentional modifications are made in a laboratory for various purposes, such as developing hardier seeds and plants, and increasingly to treat human disease. The use of gene editing technology remains controversial.
Modifications are changes in an individual's DNA due to incidental mutation or intentional genetic modification using various biotechnologies. [ 1 ] Although confusion exists between the terms "modification" and "mutation", as they are often used interchangeably, modification differs from mutation because it acts as an umbrella term, encompassing both mutation and genetic engineering. [ 1 ] Both of these subcategories result in a change to an organism's observable characteristics, also known as its phenotype , caused by alterations in the organism's genotype, or its specific alleles, resulting in altered gene expression. [ 2 ] Although heritability plays a large role in an individual's expression, as in cases of epigenetic modifications, not all instances of modification are heritable. No matter the origins of such variation at the genetic level, it impacts the creation and interaction of proteins, changing cell function, phenotype, and organism function. [ 3 ]
Genetic modifications can occur naturally, through aforementioned mutations in an organism's genome, or through biotechnological methods of selecting a gene of interest to manipulate in order to make something new or improve upon what already exists. [ 1 ] This distinction between changes that occur naturally and those that are intentional is key to understanding the difference between mutation and genetic engineering. [ 1 ]
Mutation can be more accurately defined as any non-combinatorial change in phenotype that can be consistently inherited from parent to offspring over generations. [ 1 ] Mutations come in numerous forms and can be attributed to many factors, though most arise from mistakes during DNA replication or from exposure to external agents. [ 4 ] Although cellular processes are highly efficient, they are not perfect, and the resulting errors create disparities between organisms of the same species. [ 4 ] These disparities can produce phenotypic effects of all intensities, ranging from no observable impact to possible inviability. [ 4 ] Gene expression can also vary with environmental conditions such as climate, diet, oxygen levels, light cycles, and exposure to mutagens or chemicals strongly related to disease susceptibility. [ 5 ] [ 6 ] The timing and duration of exposure to such elements is a critical factor as well, as it can significantly affect the phenotypic response of an organism, with severity generally increasing over time. [ 7 ]
Methods:
There are several origins, or forms, of mutation, including spontaneous mutation, errors during replication and repair, and mutation due to environmental effects. [ 8 ] These origins can give rise to many different types of mutations, which influence gene expression on both large and small scales. [ 8 ]
Genetic engineering is a type of intentional genetic modification, which uses biotechnology to alter an organism's genome. [ citation needed ] According to World Health Organization (WHO), genetically modified organisms are defined as "Organisms (i.e. plants, animals or microorganisms) in which the genetic material (DNA) has been altered in a way that does not occur naturally by mating and/or natural recombination”. [ 9 ] This type of modification can involve insertions or deletions of DNA bases into the existing genetic code. [ 10 ] In biotechnological methodology, a series of four steps are used in order to create a genetically modified organism (GMO). [ 11 ]
Methods:
CRISPR methods are a popular type of the aforementioned process of genome editing. [ 12 ] Standing for 'Clustered Regularly Interspaced Short Palindromic Repeats', CRISPR gene editing allows scientists to deliberately alter gene expression, correcting errors or creating new variations. [ 12 ] Since 2012, scientists have worked to develop this technology, which has the potential both to cure genetic diseases and to modify traits to be more desirable, purposefully altering DNA with a high degree of precision. [ 13 ]
The dandelion : Most dandelions have long stems, but an increase in potential threats in their environment has caused average dandelion stem length to decrease within certain species, allowing them to better avoid said threats. [ 14 ] This adaptation was possible because a mutation occurring in a shorter-stemmed individual was selected by environmental pressures. [ 15 ] Because the shorter-stemmed dandelions had higher fitness than long-stemmed dandelions and survived more often, the genetic frequency of the population was altered, genetically modified through the original occurrence of a mutation. [ 15 ]
Sickle cell disease : In a healthy individual, the HBB gene is responsible for encoding hemoglobin which carries oxygen throughout the body. [ 16 ] However, when a person has this disease due to inheriting two mutated copies of the HBB gene due to a base pair point mutation, their red blood cells are shaped differently. [ 16 ] This altered shape results in blockages of blood flow with serious health implications. [ 16 ] On the other hand, those who inherit only one mutated copy of this gene have higher protection against malaria . [ 17 ]
Alzheimer's disease : In a synthetic example in a laboratory, scientists isolated the amyloid precursor protein (APP) gene, known for its role in causing Alzheimer's in humans, and introduced it into the nerve cells of worms. [ 10 ] In doing this, scientists aimed to study the progression of Alzheimer's disease in this simple organism by tagging the APP protein with green fluorescent protein, which allowed them to better visualize the gene product as the worm aged. [ 10 ] Using what they learned from experimentation with the simple worm and the APP gene, scientists increased their understanding of this gene's role in causing Alzheimer's disease in humans. [ 10 ]
Insulin : The first use of genetically modified bacteria was for producing the medical insulin that diabetics need to control their blood sugar. [ 18 ] Through a series of genetic engineering steps, scientists are able to produce a medical product that millions of people rely on worldwide. [ 10 ]
Fast-paced developments in CRISPR-Cas9 gene editing technology have increased both the concerns and the relevance of this ethical controversy as the technique has become more widely used. [ 19 ] [ 20 ] The scientific community recommends continued evaluation of the risks and benefits of utilizing genetically modified organisms in everyday life. [ 21 ] Genetic modifications are studied by researchers under controlled conditions after they are inserted into an organism, allowing for improved scientific understanding of the effects of particular gene modifications and organism responses.
In April 2015, gene editing technology was used on human embryos, and debate about the ethics of such actions has persisted since. [ 22 ] Nonetheless, scientists and policymakers agree that public deliberations should decide the legality of germ line genome editing. [ 23 ] Modifying a person's non-heritable DNA with the goal of improving a medical condition is generally accepted and is monitored by a range of ethical protocols. [ 19 ] This includes interventions like organ donation, bone marrow transplants, and types of gene therapies, all of which take cultural and religious values into account. [ 19 ] On the other hand, there is contention surrounding heritable gene modification, exemplified by the fact that 19 countries have outlawed this type of genetic modification. [ 19 ] For those who believe a human embryo holds the same moral standing as an adult, genome editing in early development, occurring at or immediately following fertilization, raises moral concerns. [ 13 ] In order to mitigate these concerns, studies using human embryos have used leftover embryos from IVF treatments. [ 13 ] Scientists have also suggested creating fertilized zygotes from donated sperm and eggs strictly for research purposes. [ 13 ] However, this raises an additional ethical concern within the scientific community about the concept of a zygote being created only to be used for experimentation. [ 13 ]
Debate also surrounds genetically engineered food in terms of the controversial health and environmental effects that it may have over various time scales. [ 24 ] Regulations have been implemented for the approval of genetically modified foods to reduce some of the uncertainty that remains in this field. [ 25 ] Reasons in favor of developing genetically modified foods include meeting the demands of the exponentially growing human population, compensating for the decrease in farmable land, and addressing the decrease in genetic diversity that limits possible improvement of species. [ 24 ] Additional benefits include improved herbicide tolerance, increased pest and bacterial/fungal/viral resistance, higher stress tolerance, and increased nutrient content within the organism. [ 26 ] The biotechnology of genetic engineering provides the opportunity to achieve global food security by addressing these problems and positively impacting the food production economy. [ 24 ] Potential health risks are also being researched, and there are requirements for the safety of genetically modified foods to be clarified before they are consumed by the public. Environmental consequences are also considered, due to disruptions within the food web when these organisms are added to a previously balanced ecosystem. [ 24 ] Because genetic modification proceeds so quickly, the environment may not be able to adapt and integrate the new organism into the ecosystem, or the organism could have unwanted effects on its surroundings. [ 26 ] Other impacts on the environment include unnatural gene flow, modification of soil and water chemistry, and reduction of species diversity. [ 25 ]
Ethical considerations regarding gene editing remain largely controversial within the scientific community due to their open-ended implications for the rest of society. [ 19 ] Although no consensus has been reached, there are plans in place to utilize the available resources to continue education and scientific research, as well as research on the ethical, legal, and social issues associated with genetic modification. [ 20 ] | https://en.wikipedia.org/wiki/Modifications_(genetics) |
Modified Huffman coding is used in fax machines to encode black-on-white images ( bitmaps ). It combines the variable-length codes of Huffman coding with the coding of repetitive data in run-length encoding .
The basic Huffman coding provides a way to compress files with much repeating data, like a file containing text, where the alphabet letters are the repeating objects. However, a single scan line contains only two kinds of elements – white pixels and black pixels – which can be represented directly as 0 and 1. This "alphabet" of only two symbols is too small for Huffman coding to be applied directly. But if we first use run-length encoding, we have more objects to encode. Here is an example taken from the article on run-length encoding :
A hypothetical scan line, with B representing a black pixel and W representing white, might read as follows: WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW
With a run-length encoding (RLE) data compression algorithm applied to the above hypothetical scan line, it can be rendered as follows: 12W1B12W3B24W1B14W
Here we see that we have several different numbers in addition to the two items "white" and "black". These numbers provide plenty of additional items to use, so Huffman coding can be applied directly to the sequence above to reduce the size even more.
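The following Python sketch is only a simplified, hypothetical illustration of the two-stage idea the article describes (run-length encode the two-symbol scan line first, then build Huffman codes over the resulting run symbols); it does not reproduce the fixed code tables of the actual fax standard (ITU-T T.4):

```python
from collections import Counter
import heapq

def run_lengths(scanline):
    """Collapse a scan line of 'W'/'B' pixels into (length, colour) runs."""
    runs, count = [], 1
    for prev, cur in zip(scanline, scanline[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((count, prev))
            count = 1
    runs.append((count, scanline[-1]))
    return runs

def huffman_codes(symbols):
    """Assign shorter bit strings to more frequent symbols."""
    heap = [[weight, i, (sym, "")]
            for i, (sym, weight) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        # Prepend a bit to every partial code in each merged subtree.
        merged = ([lo[0] + hi[0], tiebreak]
                  + [(s, "0" + c) for s, c in lo[2:]]
                  + [(s, "1" + c) for s, c in hi[2:]])
        heapq.heappush(heap, merged)
        tiebreak += 1
    return dict(heap[0][2:])

# The hypothetical scan line from the run-length encoding example above.
line = "W" * 12 + "B" + "W" * 12 + "BBB" + "W" * 24 + "B" + "W" * 14
runs = run_lengths(line)    # [(12, 'W'), (1, 'B'), (12, 'W'), (3, 'B'), ...]
codes = huffman_codes(runs)
encoded = "".join(codes[r] for r in runs)
print(len(line), "pixels ->", len(encoded), "bits")
```

In practice the real Modified Huffman scheme avoids building a tree per page: run lengths of white and black runs are looked up in standardized terminating and make-up code tables agreed in advance by sender and receiver.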
| https://en.wikipedia.org/wiki/Modified_Huffman_coding |
Modified Newtonian dynamics ( MOND ) is a theory that proposes a modification of Newton's laws to account for observed properties of galaxies . Modifying Newton's law of gravity results in modified gravity , while modifying Newton's second law results in modified inertia . The latter has received little attention compared to the modified gravity version. Its primary motivation is to explain galaxy rotation curves without invoking dark matter , and is one of the most well-known theories of this class. However, it has not gained widespread acceptance, with the majority of astrophysicists supporting the Lambda-CDM model as providing the better fit to observations. [ 1 ] [ 2 ]
MOND was developed in 1982 and presented in 1983 by Israeli physicist Mordehai Milgrom . [ 3 ] [ 4 ] Milgrom noted that galaxy rotation curve data, which seemed to show that galaxies contain more matter than is observed, could also be explained if the gravitational force experienced by a star in the outer regions of a galaxy decays more slowly than predicted by Newton's law of gravity. MOND modifies Newton's laws for extremely small accelerations which are common in galaxies and galaxy clusters. This provides a good fit to galaxy rotation curve data while leaving the dynamics of the Solar System with its strong gravitational field intact. [ 5 ] However, the theory predicts that the gravitational field of the galaxy could influence the orbits of Kuiper Belt objects through the external field effect , which is unique to MOND. [ 6 ]
Since Milgrom's original proposal, MOND has seen some successes. It is capable of explaining several observations in galaxy dynamics, [ 7 ] [ 8 ] a number of which can be difficult for Lambda-CDM to explain. [ 9 ] [ 10 ] However, MOND struggles to explain a range of other observations, such as the acoustic peaks of the cosmic microwave background and the matter power spectrum of the large scale structure of the universe . Furthermore, because MOND is not a relativistic theory, it struggles to explain relativistic effects such as gravitational lensing and gravitational waves . Finally, a major weakness of MOND is that all galaxy clusters, including the famous Bullet cluster , show a residual mass discrepancy even when analyzed using MOND. [ 7 ] [ 11 ] [ 12 ]
A minority of astrophysicists continue to work on the theory. Jacob Bekenstein developed a relativistic generalization of MOND in 2004, TeVeS , which however had its own set of problems. Another notable attempt was by Constantinos Skordis and Tom Złośnik in 2021, which proposed a relativistic model of MOND that is compatible with cosmic microwave background observations, but appears to be highly contrived. [ 1 ] [ 13 ]
Several independent observations suggest that the visible mass in galaxies and galaxy clusters is insufficient to account for their dynamics, when analyzed using Newton's laws. This discrepancy – known as the "missing mass problem" – was identified by several observers , most notably by Swiss astronomer Fritz Zwicky in 1933 through his study of the Coma cluster . [ 15 ] [ 16 ] This was subsequently extended to include spiral galaxies by the 1939 work of Horace Babcock on Andromeda . [ 17 ]
These early studies were augmented and brought to the attention of the astronomical community in the 1960s and 1970s by the work of Vera Rubin , who mapped in detail the rotation velocities of stars in a large sample of spirals. While Newton's Laws predict that stellar rotation velocities should decrease with distance from the galactic centre, Rubin and collaborators found instead that they remain almost constant [ 18 ] – the rotation curves are said to be "flat". This observation necessitates at least one of the following: (1) there exist large quantities of unseen matter in galaxies, which boost the stars' velocities beyond what would be expected from the visible mass alone, or (2) Newton's laws do not apply to galaxies.
Option (1) leads to the dark matter hypothesis; option (2) leads to MOND.
The majority of astronomers , astrophysicists , and cosmologists accept dark matter as the explanation for galactic rotation curves (based on general relativity , and hence Newtonian mechanics), and are committed to a dark matter solution of the missing-mass problem. [ 19 ] The primary difference between supporters of ΛCDM and MOND is in the observations for which they demand a robust, quantitative explanation, and those for which they are satisfied with a qualitative account, or are prepared to leave for future work. Proponents of MOND emphasize predictions made on galaxy scales (where MOND enjoys its most notable successes) and believe that a cosmological model consistent with galaxy dynamics has yet to be discovered. Proponents of ΛCDM require high levels of cosmological accuracy (which concordance cosmology provides) and argue that a resolution of galaxy-scale issues will follow from a better understanding of the complicated baryonic astrophysics underlying galaxy formation . [ 7 ] [ 20 ]
The basic premise of MOND is that while Newton's laws have been extensively tested in high-acceleration environments (in the Solar System and on Earth), they have not been verified for objects with extremely low acceleration, such as stars in the outer parts of galaxies. This led Milgrom to postulate a new effective gravitational force law (sometimes referred to as "Milgrom's law") that relates the true acceleration of an object to the acceleration that would be predicted for it on the basis of Newtonian mechanics. [ 3 ] This law, the keystone of MOND, is chosen to reproduce the Newtonian result at high acceleration but leads to different ("deep-MOND") behavior at low acceleration:

$F_{N} = m\,\mu\!\left(\frac{a}{a_{0}}\right) a$
Here F N is the Newtonian force, m is the object's (gravitational) mass , a is its acceleration, μ ( x ) is an as-yet unspecified function (called the interpolating function ), and a 0 is a new fundamental constant which marks the transition between the Newtonian and deep-MOND regimes. Agreement with Newtonian mechanics requires $\mu(x) \to 1$ for $x \gg 1$, and consistency with astronomical observations requires $\mu(x) \to x$ for $x \ll 1$.
Beyond these limits, the interpolating function is not specified by the hypothesis.
Milgrom's law can be interpreted in two ways: as a modification of the classical law of gravity (the gravitational force is weaker than Newton predicts at low accelerations), or as a modification of Newton's second law (inertia is reduced at low accelerations, so a given force produces more acceleration).
Milgrom's law states that for accelerations smaller than a 0 , accelerations increasingly depart from the standard M · G / r 2 Newtonian relationship of mass and distance, wherein gravitational strength is linearly proportional to mass and inversely proportional to the square of distance. Instead, the theory holds that below the a 0 value the gravitational field increases with the square root of mass and falls off inversely with distance (rather than distance squared). Whenever the gravitational field is larger than a 0 , whether near the center of a galaxy or for an object near or on Earth, MOND yields dynamics that are nearly indistinguishable from those of Newtonian gravity. For instance, if the gravitational acceleration equals a 0 at a certain distance from a mass, then at ten times that distance Newtonian gravity predicts a hundredfold decline in gravity whereas MOND predicts only a tenfold reduction. By fitting Milgrom's law to rotation curve data, Begeman et al. found a 0 ≈ 1.2 × 10 −10 m/s 2 to be optimal. [ 23 ] The value of Milgrom's acceleration constant has not varied meaningfully since then. [ 24 ] [ 25 ] [ 26 ] [ 27 ] The value of a 0 also establishes the distance from a mass at which Newtonian and MOND dynamics diverge.
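The tenfold-versus-hundredfold comparison above can be derived in two lines from the deep-MOND limit of Milgrom's law as stated earlier (a sketch, using the point-mass Newtonian field as input):

```latex
% Deep-MOND limit: \mu(x) \approx x for x \ll 1, so a\,\mu(a/a_0) = a_N gives
\frac{a^{2}}{a_{0}} = a_{N} = \frac{GM}{r^{2}}
\quad\Longrightarrow\quad
a = \frac{\sqrt{G M a_{0}}}{r}
% Newtonian field: a_N \propto r^{-2}, a factor of 100 weaker at 10x the distance.
% Deep-MOND field: a \propto r^{-1}, only a factor of 10 weaker at 10x the distance.
```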
By itself, Milgrom's law is not a complete and self-contained physical theory , but rather an empirically motivated variant of an equation in classical mechanics. Its status within a coherent non-relativistic hypothesis of MOND is akin to Kepler's Third Law within Newtonian mechanics. Milgrom's law provides a succinct description of observational facts, but must itself be grounded in a proper field theory. Several complete classical hypotheses have been proposed (typically along "modified gravity" as opposed to "modified inertia" lines). These generally yield Milgrom's law exactly in situations of high symmetry and otherwise deviate from it slightly. For MOND as modified gravity two complete field theories exist called AQUAL and QUMOND . A subset of these non-relativistic hypotheses have been further embedded within relativistic theories, which are capable of making contact with non-classical phenomena (e.g., gravitational lensing ) and cosmology . [ 28 ] Distinguishing both theoretically and observationally between these alternatives is a subject of current research.
Milgrom's law uses an interpolation function to join its two limits together. It represents a simple algorithm to convert Newtonian gravitational accelerations to observed kinematic accelerations and vice versa. Many functions have been proposed in the literature, although currently there is no single interpolation function that satisfies all constraints. [ 29 ] Two common choices are the "simple interpolating function" and the "standard interpolating function". [ 28 ] Each has a μ and a ν direction to convert the Milgromian gravitational field to the Newtonian and vice versa, such that $g_{N} = \mu(g/a_{0})\,g$ and $g = \nu(g_{N}/a_{0})\,g_{N}$.
The simple interpolation function is: $\mu(x) = \frac{x}{1+x}$
The standard interpolation function is: $\mu(x) = \frac{x}{\sqrt{1+x^{2}}}$
Thus, in the deep-MOND regime ( a ≪ a 0 ), both choices reduce to $\mu(x) \approx x$, giving $a = \sqrt{a_{N}\,a_{0}}$.
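A brief numerical sketch of the two interpolation functions quoted above; rather than quoting a closed-form ν, the true acceleration is recovered from the Newtonian one by root-finding (the checks and constants are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

A0 = 1.2e-10  # Milgrom's constant in m/s^2 (the Begeman et al. fit)

def mu_simple(x):
    return x / (1.0 + x)

def mu_standard(x):
    return x / np.sqrt(1.0 + x * x)

def true_acceleration(a_newton, mu=mu_simple):
    """Solve Milgrom's law  a * mu(a/a0) = a_N  for a, numerically."""
    return brentq(lambda a: a * mu(a / A0) - a_newton, 1e-20, 1.0)

# Deep-MOND check (a_N << a0): a should approach sqrt(a_N * a0).
a_N = 1e-13
print(true_acceleration(a_N), np.sqrt(a_N * A0))

# High-acceleration check (a_N >> a0): a should approach a_N.
print(true_acceleration(1e-6, mu=mu_standard), 1e-6)
```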
Data from spiral and elliptical galaxies favour the simple interpolation function, [ 30 ] [ 31 ] whereas data from lunar laser ranging and radio tracking data of the Cassini spacecraft towards Saturn require interpolation functions that converge to Newtonian gravity faster. [ 29 ] [ 32 ]
Milgrom's law requires incorporation into a complete hypothesis if it is to satisfy conservation laws and provide a unique solution for the time evolution of any physical system. [ 33 ] Each of the theories described here reduce to Milgrom's law in situations of high symmetry, but produce different behavior in detail.
Both AQUAL and QUMOND propose changes to the gravitational part of the classical matter action, and hence interpret Milgrom's law as a modification of Newtonian gravity as opposed to Newton's second law. The alternative is to turn the kinetic term of the action into a functional depending on the trajectory of the particle. Such "modified inertia" theories, however, are difficult to use because they are time-nonlocal, require energy and momentum to be non-trivially redefined to be conserved, and have predictions that depend on the entirety of a particle's orbit. [ 28 ]
The first hypothesis of MOND (dubbed AQUAL, for "A QUAdratic Lagrangian") was constructed in 1984 by Milgrom and Jacob Bekenstein . [ 4 ] AQUAL generates MONDian behavior by modifying the gravitational term in the classical Lagrangian from being quadratic in the gradient of the Newtonian potential to a more general function F. This function F reduces to the μ-version of the interpolation function after varying the action over φ using the principle of least action . In Newtonian gravity and AQUAL the Lagrangian densities are:

$\mathcal{L}_{\mathrm{Newton}} = -\rho\phi - \frac{|\nabla\phi|^{2}}{8\pi G}, \qquad \mathcal{L}_{\mathrm{AQUAL}} = -\rho\phi - \frac{a_{0}^{2}}{8\pi G}\,F\!\left(\frac{|\nabla\phi|^{2}}{a_{0}^{2}}\right)$
where φ is the standard Newtonian gravitational potential and F is a new dimensionless function. Applying the Euler–Lagrange equations in the standard way then leads to a non-linear generalization of the Newton–Poisson equation :

$\nabla \cdot \left[\mu\!\left(\frac{|\nabla\phi|}{a_{0}}\right) \nabla\phi\right] = 4\pi G \rho$
This can be solved given suitable boundary conditions and choice of F to yield Milgrom's law (up to a curl field correction which vanishes in situations of high symmetry). AQUAL uses the μ-version of the chosen interpolation function.
An alternative way to modify the gravitational term in the Lagrangian is to introduce a distinction between the true (MONDian) acceleration field a and the Newtonian acceleration field a N . The Lagrangian may be constructed so that a N satisfies the usual Newton–Poisson equation, and is then used to find a via an additional algebraic but non-linear step, which is chosen to satisfy Milgrom's law. This is called the "quasi-linear formulation of MOND", or QUMOND, [ 34 ] and is particularly useful for calculating the distribution of "phantom" dark matter that would be inferred from a Newtonian analysis of a given physical situation. [ 28 ] QUMOND has become the dominant MOND field theory since it was first formulated in 2010 because it is much more computationally friendly and may be more intuitive to those who have worked on numerical simulations of Newtonian gravity. [ 35 ] QUMOND uses the ν-version of the chosen interpolation function. QUMOND and AQUAL can be derived from each other using a Legendre transform. [ 36 ] The QUMOND Lagrangian density is:

$\mathcal{L}_{\mathrm{QUMOND}} = -\rho\phi - \frac{1}{8\pi G}\left[2\,\nabla\phi \cdot \nabla\phi_{N} - a_{0}^{2}\,\mathcal{Q}\!\left(\frac{|\nabla\phi_{N}|^{2}}{a_{0}^{2}}\right)\right]$

where the derivative of $\mathcal{Q}$ yields the ν-function.
Since this Lagrangian does not explicitly depend on time and is invariant under spatial translations, energy and momentum are conserved according to Noether's theorem . Varying over r yields $ma = mg$, showing that the weak equivalence principle always applies in QUMOND. However, since φ and φ N are not identical and are non-linearly related, the strong equivalence principle must be violated. This can be observed by measuring the external field effect. Furthermore, by varying over φ we get the following Newton–Poisson equation, familiar from Newtonian gravity but now with a subscript to denote that in QUMOND this equation determines the auxiliary gravitational field φ N : [ 34 ]
$\nabla^{2}\phi_{N} = 4\pi G \rho$
Finally, by varying the QUMOND Lagrangian with respect to φ N we get the QUMOND field equation: [ 34 ]

$\nabla^{2}\phi = \nabla \cdot \left[\nu\!\left(\frac{|\nabla\phi_{N}|}{a_{0}}\right) \nabla\phi_{N}\right]$
These two field equations can be solved numerically for any matter distribution with numerical solvers like Phantom of RAMSES (POR). [ 37 ]
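In spherical symmetry the two QUMOND steps reduce to pointwise algebra, which is part of what makes the formulation computationally friendly. Below is a minimal Python sketch for a point mass; the ν used here is the one paired with the simple μ(x) = x/(1+x) (obtained by solving g·μ(g/a 0 ) = g N as a quadratic in g), and the mass value is an illustrative assumption:

```python
import numpy as np

A0 = 1.2e-10    # m/s^2
G = 6.674e-11   # m^3 kg^-1 s^-2
M = 2.0e41      # kg, roughly a 10^11 solar-mass galaxy (assumed value)

def nu_simple(y):
    # Inverting g * (g/a0) / (1 + g/a0) = g_N (the simple mu) as a
    # quadratic in g gives g = g_N * (1/2 + sqrt(1/4 + 1/y)), y = g_N/a0.
    return 0.5 + np.sqrt(0.25 + 1.0 / y)

def qumond_field(r):
    g_N = G * M / r**2                 # step 1: auxiliary Newtonian field
    return nu_simple(g_N / A0) * g_N   # step 2: apply the nu function

r = np.logspace(19.5, 22.0, 6)             # ~1 kpc to ~300 kpc, in metres
v = np.sqrt(r * qumond_field(r)) / 1e3     # circular speed in km/s
print(v)  # approaches (G*M*a0)**0.25 ~ 200 km/s: a flat rotation curve
```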
In Newtonian mechanics, an object's acceleration can be found as the vector sum of the acceleration due to each of the individual forces acting on it. This means that a subsystem can be decoupled from the larger system in which it is embedded simply by referring the motion of its constituent particles to their centre of mass; in other words, the influence of the larger system is irrelevant for the internal dynamics of the subsystem. Since Milgrom's law is non-linear in acceleration, MONDian subsystems cannot be decoupled from their environment in this way, and in certain situations this leads to behaviour with no Newtonian parallel. This is known as the "external field effect" (EFE), [ 3 ] for which there exists observational evidence. [ 38 ]
The external field effect is best described by classifying physical systems according to their relative values of a in (the characteristic acceleration of one object within a subsystem due to the influence of another), a ex (the acceleration of the entire subsystem due to forces exerted by objects outside of it), and a 0 : if a in exceeds a 0 , the subsystem behaves Newtonially regardless of the external field; if both a in and a ex are below a 0 and a in dominates, the subsystem is in the standard MOND regime; if the external field dominates and exceeds a 0 , the subsystem behaves Newtonially; and if the external field dominates but both accelerations are below a 0 , the subsystem behaves Newtonially but with an enhanced effective gravitational constant.
The external field effect implies a fundamental break with the strong equivalence principle (but not the weak equivalence principle which is required by the Lagrangian [ 4 ] [ 34 ] ). The effect was postulated by Milgrom in the first of his 1983 papers to explain why some open clusters were observed to have no mass discrepancy even though their internal accelerations were below a 0 . It has since come to be recognized as a crucial element of the MOND paradigm.
The dependence in MOND of the internal dynamics of a system on its external environment (in principle, the rest of the universe ) is strongly reminiscent of Mach's principle , and may hint towards a more fundamental structure underlying Milgrom's law. In this regard, Milgrom has commented: [ 40 ]
It has been long suspected that local dynamics is strongly influenced by the universe at large, a-la Mach's principle, but MOND seems to be the first to supply concrete evidence for such a connection. This may turn out to be the most fundamental implication of MOND, beyond its implied modification of Newtonian dynamics and general relativity, and beyond the elimination of dark matter.
Since MOND was specifically designed to produce flat rotation curves, these do not constitute evidence for the hypothesis, but every matching observation adds support to the empirical law. Nevertheless, proponents claim that a broad range of astrophysical phenomena at the galactic scale are neatly accounted for within the MOND framework. [ 28 ] [ 42 ] Many of these came to light after the publication of Milgrom's original papers and are difficult to explain using the dark matter hypothesis.
While acknowledging that Milgrom's law provides a succinct and accurate description of a range of galactic phenomena, many physicists reject the idea that classical dynamics itself needs to be modified and attempt instead to explain the law's success by reference to the behavior of dark matter. Some effort has gone towards establishing the presence of a characteristic acceleration scale as a natural consequence of the behavior of cold dark matter halos, [ 70 ] [ 71 ] although Milgrom has argued that such arguments explain only a small subset of MOND phenomena . [ 72 ] An alternative proposal is to ad hoc modify the properties of dark matter (e.g., to make it interact strongly with itself or baryons) in order to induce the tight coupling between the baryonic and dark matter mass that the observations point to. [ 73 ] [ 74 ] Finally, some researchers suggest that explaining the empirical success of Milgrom's law requires a more radical break with conventional assumptions about the nature of dark matter. One idea (dubbed "dipolar dark matter") is to make dark matter gravitationally polarizable by ordinary matter and have this polarization enhance the gravitational attraction between baryons. [ 75 ]
Some ultra diffuse galaxies , such as NGC 1052-DF2 , originally appeared to be free of dark matter. Were this the case, it would have posed a problem for MOND, which cannot explain the rotation curves of such galaxies. [ a ] However, further research showed that the galaxies were at a different distance than previously thought, leaving the galaxies with plenty of room for dark matter. [ 76 ] [ 77 ] [ 78 ] The idea that a single value of a 0 can fit all the different galaxies' rotation curves has also been criticized, [ 79 ] [ 80 ] although this finding is disputed. [ 81 ] [ 82 ] It has also been claimed that MOND offers a poor fit to both the HI column density and size of Lyα absorbers. [ 83 ] Modified inertia versions of MOND have long suffered from poor theoretical compatibility with cherished physical principles such as conservation laws. Researchers working on MOND generally do not interpret it as a modification of inertia, and only very limited work has been done on this area.
Almost the entire Solar System has gravitational field strengths many orders of magnitude higher than a 0 , so the increase in gravity due to MOND is negligible. However, Solar System tests are extremely precise, and most observations have proven difficult for MOND to explain. Notably, data from lunar laser ranging rules out the simple interpolation function. [ 32 ] Radio tracking data of the Cassini spacecraft towards Saturn rules out both the simple and standard interpolation functions by testing an anomalous quadrupole effect predicted by MOND. [ 29 ] It is also possible that a full fit of Solar System ephemerides in which the masses of planets and asteroids are allowed to vary could accommodate this anomalous quadrupole effect, since these are currently determined using general relativity only. [ 35 ] Observations of long-period comets also seem to conflict with higher-order predictions of MOND. [ 84 ] Furthermore, laboratory experiments on Newton's second law seem to have ruled out modified inertia versions of MOND, with experimental accelerations reaching as low as 0.1% of a 0 without deviation from the Newtonian expectation. [ 22 ] Some Solar System observations could support MOND, as it has been suggested that the orbits of Kuiper Belt objects are best explained through MOND's external field effect, rather than through a hypothetical planet nine . [ 6 ] It has also been claimed that the variation in measurements of Newton's gravitational constant is caused by MOND acting perpendicularly to the Earth's gravitational field. [ 85 ]
The most serious problem facing Milgrom's law is that galaxy clusters show a residual mass discrepancy even when analyzed using MOND. [ 7 ] [ 83 ] This problem is long-standing and has been dubbed the "cluster conundrum". This undermines MOND as an alternative to dark matter, although the amount of extra mass required is only a fifth of that found in a Newtonian analysis and could be in the form of normal matter. [ 86 ] It has been speculated that ~2 eV neutrinos could account for the cluster observations in MOND while preserving the hypothesis's successes at the galaxy scale. [ 87 ] [ 88 ] [ 89 ] Analysis of lensing data for the galaxy cluster Abell 1689 shows that this residual missing mass problem in MOND becomes more severe towards the cores of galaxy clusters. [ 90 ]
The 2006 observation of a pair of colliding galaxy clusters known as the " Bullet Cluster " has been claimed as a significant challenge for all theories proposing a modified gravity solution to the missing mass problem, including MOND. [ 91 ] Astronomers measured the distribution of stellar and gas mass in the clusters using visible and X-ray light, respectively, and also mapped the gravitational potential using gravitational lensing. As shown in the images on the right, the X-ray gas is in the center, while the galaxies are on the outskirts. During the collision, the X-ray gas interacted and slowed down, remaining in the center, while the galaxies largely passed by one another, as the distances between them were vast. The gravitational potential reveals two large concentrations centered on the galaxies, not on the X-ray gas, where most of the normal matter is located. In ΛCDM one would also expect the clusters to each have a dark matter halo that would pass through each other during the collision (assuming, as is conventional, that dark matter is collisionless). This expectation for the dark matter is a clear explanation for the offset between the peaks of the gravitational potential and the X-ray gas. It is this offset between the gravitational potential and normal matter that was claimed by Clowe et al. as "A Direct Empirical Proof of the Existence of Dark Matter", arguing that modified gravity theories fail to account for it. [ 91 ] However, this study by Clowe et al. made no attempt to analyze the Bullet Cluster using MOND or any other modified gravity theory. Furthermore, in the same year, Angus et al. demonstrated that MOND does indeed reproduce the offset between the gravitational potential and the X-ray gas in this highly non-spherically symmetric system. [ 92 ] In MOND, one would expect the "missing mass" to be centred on regions which experience accelerations lower than a 0 , which, in the case of the Bullet Cluster, correspond to the areas containing the galaxies, not the X-ray gas. Nevertheless, MOND still fails to fully explain this cluster, as it does with other galaxy clusters, due to the remaining mass residuals in several core regions of the Bullet Cluster. [ 87 ]
Besides these observational issues, MOND and its relativistic generalizations are plagued by theoretical difficulties. [ 93 ] [ 94 ] Several ad hoc and inelegant additions to general relativity are required to create a theory compatible with a non-Newtonian non-relativistic limit, though the predictions in this limit are rather clear.
In 2004, Jacob Bekenstein formulated TeVeS , the first complete relativistic hypothesis using MONDian behaviour. [ 95 ] TeVeS is constructed from a local Lagrangian (and hence respects conservation laws), and employs a unit vector field , a dynamical and non-dynamical scalar field , a free function and a non-Einsteinian metric in order to yield AQUAL in the non-relativistic limit (low speeds and weak gravity). TeVeS has enjoyed some success in making contact with gravitational lensing and structure formation observations, [ 96 ] but faces problems when confronted with data on the anisotropy of the cosmic microwave background , [ 97 ] the lifetime of compact objects, [ 98 ] and the relationship between the lensing and matter overdensity potentials. [ 99 ] TeVeS also appears inconsistent with the speed of gravitational waves according to LIGO. [ 100 ] The speed of gravitational waves was measured to be equal to the speed of light to high precision using gravitational wave event GW170817 .
Several newer relativistic generalizations of MOND exist, including BIMOND and generalized Einstein aether theory . [ 28 ] There is also a relativistic generalization of MOND that assumes a Lorentz-type invariance as the physical basis of MOND phenomenology. [ 101 ] Recently Skordis and Złośnik proposed a relativistic model of MOND that is compatible with cosmic microwave background observations, the matter power spectrum and the speed of gravity. [ 13 ]
It has been claimed that MOND is generally unsuited to forming the basis of cosmology. [ 93 ] A significant piece of evidence in favor of standard dark matter is the observed anisotropies in the cosmic microwave background . [ 102 ] While ΛCDM is able to explain the observed angular power spectrum, MOND has a much harder time. [ 103 ] It is possible to construct relativistic generalizations of MOND that can fit CMB observations, [ 13 ] but this requires terms that do not look natural, and several observations (such as the amount of gravitational lensing) remain difficult to explain. [ 1 ] MOND also encounters difficulties explaining structure formation : density perturbations in MOND may grow so rapidly that too much structure forms by the present epoch. [ 104 ] However, galaxy surveys appear to show massive galaxy formation occurring much more rapidly at early times than is possible according to ΛCDM. [ 105 ]
There is a potential link between MOND and cosmology. It has been noted that the value of a 0 is within an order of magnitude of cH 0 , where c is the speed of light and H 0 is the Hubble constant (a measure of the present-day expansion rate of the universe). [ 3 ] It is also close to the acceleration scale associated with the cosmological constant, $\sqrt{\Lambda}\,c^{2}$, where Λ is the cosmological constant . [ 106 ] Recent work on a transactional formulation of entropic gravity by Schlatter and Kastner [ 107 ] suggests a natural connection between a 0 , H 0 , and the cosmological constant.
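The coincidence can be checked with round numbers; taking H 0 ≈ 70 km s −1 Mpc −1 as an assumed value:

```latex
H_0 \approx \frac{70\times10^{3}\ \mathrm{m\,s^{-1}}}{3.09\times10^{22}\ \mathrm{m}}
    \approx 2.3\times10^{-18}\ \mathrm{s^{-1}},
\qquad
cH_0 \approx (3.0\times10^{8}\ \mathrm{m\,s^{-1}})(2.3\times10^{-18}\ \mathrm{s^{-1}})
     \approx 6.8\times10^{-10}\ \mathrm{m\,s^{-2}}
```

which is within an order of magnitude of a 0 ≈ 1.2 × 10 −10 m/s 2 ; indeed cH 0 /(2π) ≈ 1.1 × 10 −10 m/s 2 .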
Several observational and experimental tests have been proposed to help distinguish [ 108 ] between MOND and dark matter-based models:
| https://en.wikipedia.org/wiki/Modified_Newtonian_dynamics |
The Modified Philip-Dunne Infiltrometer ( MPD ) is a standardized method for measuring the hydraulic conductivity of soil in the field. It is a falling head method outlined in ASTM International Standard D8152. [ 1 ]
The MPD Infiltrometer consists of a cylinder with an inside diameter of 4 inches (10.16 cm) and a length of 17.5 inches (44.5 cm). The procedure involves inserting the cylinder 5 centimeters into the ground and filling it with one gallon of water. Water level readings are recorded at regular time intervals until the cylinder is drained. The collected data is then used to calculate the saturated hydraulic conductivity (Ksat) using the MPD equation specified in ASTM Standard D8152. [ 1 ]
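The actual Ksat calculation is specified by the MPD equation in ASTM D8152, which is not reproduced in this article. The Python sketch below, with hypothetical readings, only illustrates the falling-head bookkeeping the field procedure generates, reducing water level versus time to per-interval infiltration rates:

```python
import numpy as np

# Hypothetical water-level readings from an MPD-style falling-head test.
t_min = np.array([0, 2, 5, 10, 20, 35])                # elapsed time, minutes
h_cm = np.array([40.0, 37.5, 34.2, 29.6, 22.1, 13.0])  # water level, cm

dt = np.diff(t_min) * 60.0     # interval lengths, seconds
dh = -np.diff(h_cm) / 100.0    # head lost per interval, metres

rates = dh / dt                # bulk infiltration rate per interval, m/s
for t, q in zip(t_min[1:], rates):
    print(f"t = {t:2d} min   rate = {q:.2e} m/s")
```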
The MPD method builds upon earlier work in infiltration theory. The Green-Ampt theory, developed in 1911, provided a framework for estimating infiltration rates by incorporating factors like soil suction head, porosity, hydraulic conductivity, and time. [ 2 ]
In 1993, Philip and Dunne expanded on this theory, developing a methodology using a water-filled cylinder to measure head drop over time. Their approach incorporated the Green-Ampt theory to calculate hydraulic conductivity, which eliminates the need for pre-saturating the ground with water. [ 3 ]
The University of Minnesota subsequently modified Philip and Dunne's method by introducing a variation to the calculation in 2007. [ 4 ] This refinement led to the renaming of the method as the Modified Philip-Dunne Infiltrometer (MPD). The revised methodology underwent peer review, and in 2018 the MPD test was formally adopted as ASTM International Standard D8152. [ 1 ]
The Nevada Tahoe Conservation District conducted a two-year study (2012–2014) of the Modified Philip Dunne Infiltrometer and compared the results of the MPD to the Constant Head Permeameter, Double Ring Infiltrometer, and the Tension Disc Infiltrometer. The purpose of the study was to determine if the MPD could be used as a faster method to test the hydraulic conductivity of the rain gardens in the Lake Tahoe Region. The results showed the MPD measurements were comparable to the other three test methods. [ 5 ]
The authors of a 2019 study said "that the MPDI is a useful field method to estimate Ks values, but it is not a robust method to estimate Ψ values." [ 6 ]
The Georgia Department of Transportation adopted the MPD method to conduct a study of roadside infiltration practices (ditches), to determine the slope and design of these practices that achieve maximum efficiency in removing solid pollutants. [ 7 ]
Per ASTM D8152, the MPD method is not suitable in soils (clay) where infiltration is slower than 2.5 mm per hour. [ 1 ] | https://en.wikipedia.org/wiki/Modified_Philip_Dunne_Infiltrometer |
Modified Wittig–Claisen tandem reaction is a cascade reaction that combines the Wittig reaction and Claisen rearrangement together. The Wittig reaction generates the allyl vinyl ether intermediate that further participates in a Claisen rearrangement to generate the final γ,δ-unsaturated ketone or aldehyde product (Figure "Modified Wittig–Claisen tandem reactions").
The modified Wittig–Claisen tandem reaction has been a useful retrosynthetic strategy and has been applied to the synthesis of various complex natural products and other molecules. This reaction is especially useful for the construction of cyclic ketones with a double bond at the γ,δ-position. Paquette and co-workers reported the synthesis of the 4-cyclooctenone structure by expanding the six-membered ring of a 2-cyclohexanone structure. [ 1 ] The key step was a tandem process that combines Tebbe olefination (a reaction similar to the Wittig reaction) with Claisen rearrangement (Figure "Application of modified Wittig–Claisen tandem reactions for construction of 4-cyclooctenone structure").
Tandem Wittig–Claisen reaction has also been applied to the construction of the spiro[pyrrolidin-3,3’-oxindole] ring system in natural products such as horsfiline . [ 2 ] The synthesis started with a simple o-nitrobenzaldehyde . A Wittig–Claisen reaction sequence converted the starting material to a 4-pentenal derivative that could serve as a versatile intermediate for the synthesis of various natural products. In this case, the 4-pentenal derivative was further converted to the natural product horsfiline, the active ingredient of a traditional herbal medicine with analgesic effects. | https://en.wikipedia.org/wiki/Modified_Wittig–Claisen_tandem_reaction |
Modified Active Gas Sampling (MAGS) is an environmental engineering assessment technique which rapidly detects unsaturated soil source areas impacted by volatile organic compounds . The technique was developed by HSA Engineers & Scientists in Fort Myers, Florida in 2002, led by Richard Lewis, Steven Folsom, and Brian Moore. It is being used all over the United States , and has been adopted by the state of Florida in its Dry-cleaning Solvent Cleanup Program. [ 1 ]
MAGS involves the extraction and analysis of soil vapor from a piezometer screened through the unsaturated soil column for the purpose of locating unsaturated zone source material. According to the MAGS Manual, written by HSA and adopted by the Florida Department of Environmental Protection , MAGS is performed "by utilizing a typical regenerative blower fitted to a temporary soil vapor extraction well, [such that] a large volume of soil can be assessed with a limited number of samples. While lacking the resolution of traditional soil sampling methods (e.g., discrete soil sampling, low flow active gas sampling, etc.), the statistical representativeness (in the sense of sample coverage) of MAGS results versus traditional methods is much greater. Moreover, the results of the assessment provide useful transport and exposure assessment information over traditional techniques. Lastly, MAGS is effective as both an initial site assessment and remedial assessment tool, in that, MAGS directly yields data required for remedial design." [ 2 ]
MAGS is an alternative to discrete and composite soil sampling . While MAGS does not describe the sample with as much precision as those methods, it is more powerful statistically: it represents a larger area of a site, which is more useful in determining the presence of a compound. Besides increasing the accuracy in identifying the presence of compounds in the soil, MAGS can also quickly and accurately narrow down the location and spread of the compounds after a few trials. Once the location has been determined, more thorough and traditional soil borings can be done in the identified location, instead of sampling a whole site. [ 3 ]
HSA found particular success using the technique at solvent-impacted sites that were showing signs of rebound after initial remediation efforts. These rebounds are commonly the result of multiple (relatively small) release areas that had not been previously discovered with discrete soil sampling. MAGS can be useful in determining how effectively a site has been cleaned up post-remediation. [ 4 ]
HSA Engineers & Scientists considered patenting MAGS technology, but decided to trademark MAGS instead, asking that those who use the technique credit the firm. [ 4 ]
In 2009, HSA was recognized by the Environmental Business Journal with a Technology Merit Award in the category of remediation for the invention of MAGS technology. [ 1 ] | https://en.wikipedia.org/wiki/Modified_active_gas_sampling |
Modified aldol tandem reaction is a sequential chemical transformation that combines the aldol reaction with other chemical reactions that generate enolates . Enolates are a common building block in chemical syntheses and are typically formed by the addition of base to a ketone or aldehyde . Modified aldol tandem reactions allow similar reactivity to be produced without the need for a base, which may have adverse effects in a given chemical synthesis. A representative example is the decarboxylative aldol reaction (Figure "Modified aldol tandem reaction, decarboxylative aldol reaction as an example"), where the enolate is generated via a decarboxylation reaction mediated by either transition metals or organocatalysts. A key advantage of this reaction over other types of aldol reaction is the selective generation of an enolate in the presence of aldehydes. This allows the directed aldol reaction to produce the desired cross aldol product.
Transition metals have been used to mediate the modified aldol tandem reaction. Allyl β-keto carboxylates can be used as substrates for the palladium-mediated decarboxylative aldol reaction (Figure "Palladium-mediated decarboxylative aldol reaction with allyl β-keto carboxylates"). [ 1 ] The allyl group can be removed by palladium, and the subsequent decarboxylation selectively generates the enolate at the β-keto group, which can then react with an aldehyde to generate the aldol product.
Using a decarboxylation reaction to generate an enolate is a common strategy in biosynthetic pathways such as polyketide synthesis, where a malonic acid half thioester can be converted to the corresponding enolate for the Claisen condensation reaction. Inspired by this, a modified tandem aldol reaction has been developed using the malonic acid half thioester as the enolate source. [ 2 ] A copper-based catalyst system has been developed for efficient aldol formation under mild conditions (Figure "Decarboxylative aldol reaction with malonic acid half thioester"). | https://en.wikipedia.org/wiki/Modified_aldol_tandem_reaction |
Modified atmosphere packaging ( MAP ) is the practice of modifying the composition of the internal atmosphere of a package (commonly food packages, drugs, etc.) in order to improve the shelf life . [ 1 ] [ 2 ] The need for this technology for food arises from the short shelf life of food products such as meat, fish, poultry, and dairy in the presence of oxygen . In food, oxygen is readily available for lipid oxidation reactions. Oxygen also helps maintain high respiration rates of fresh produce, which contribute to shortened shelf life. [ 3 ] From a microbiological aspect, oxygen encourages the growth of aerobic spoilage microorganisms . [ 2 ] Therefore, the reduction of oxygen and its replacement with other gases can reduce or delay oxidation reactions and microbiological spoilage. Oxygen scavengers may also be used to reduce browning due to lipid oxidation by halting the auto-oxidative chemical process. MAP accomplishes this by replacing the atmosphere in the package with different compositions of gases.
The modification process generally lowers the amount of oxygen (O 2 ) in the headspace of the package. Oxygen can be replaced with nitrogen (N 2 ), a comparatively inert gas, or carbon dioxide (CO 2 ). [ 2 ]
A stable atmosphere of gases inside the packaging can be achieved using active techniques, such as gas flushing and compensated vacuum, or passively by designing “breathable” films.
The first recorded beneficial effects of using modified atmosphere date back to 1821. Jacques Étienne Bérard , a professor at the School of Pharmacy in Montpellier, France , reported delayed ripening of fruit and increased shelf life in low-oxygen storage conditions. [ 4 ] Controlled atmosphere storage (CAS) was used from the 1930s when ships transporting fresh apples and pears had high levels of CO 2 in their holding rooms in order to increase the shelf life of the product. [ 5 ] In the 1970s MA packages reached the stores when bacon and fish were sold in retail packs in Mexico. Since then development has been continuous and interest in MAP has grown due to consumer demand.
Atmosphere within the package can be modified passively or actively. [ 6 ] In passive MAP, the high concentration of CO 2 and low O 2 levels in the package is achieved over time as a result of respiration of the product and gas transmission rates of the packaging film. This method is commonly used for fresh respiring fruits and vegetables. Reducing O 2 and increasing CO 2 slows down respiration rate, conserves stored energy, and therefore extended shelf life . [ 7 ] On the other hand, active MA involves the use of active systems such as O 2 and CO 2 scavengers or emitters, moisture absorbers, ethylene scavengers, ethanol emitters and gas flushing in the packaging film or container to modify the atmosphere within the package. [ 7 ]
The mixture of gases selected for a MA package depends on the type of product, the packaging materials and the storage temperature. The atmosphere in an MA package consists mainly of adjusted amounts of N 2 , O 2 , and CO 2. [ 6 ] [ 8 ] Reduction of O 2 promotes delay in deteriorative reactions in foods such as lipid oxidation , browning reactions and growth of spoilage organisms. [ 5 ] [ 6 ] Low O 2 levels of 3-5% are used to slow down respiration rate in fruits and vegetables. [ 6 ] In the case of red meat, however, high levels of O 2 (~80%) are used to reduce oxidation of myoglobin and maintain an attractive bright red color of the meat. [ 9 ] Meat color enhancement is not required for pork, poultry and cooked meats; therefore, a higher concentration of CO 2 is used to extend the shelf life. [ 8 ] Levels higher than 10% of CO 2 are phytotoxic for fruit and vegetables, so CO 2 is maintained below this level.
N 2 is mostly used as a filler gas to prevent pack collapse. [ 5 ] [ 8 ] In addition, it is also used to prevent oxidative rancidity in packaged products such as snack foods by displacing atmospheric air, especially oxygen, therefore extending shelf life. [ 5 ] [ 8 ] The use of noble gases such as helium (He), argon (Ar) and xenon (Xe) to replace N 2 as the balancing gas in MAP can also be used to preserve and extend the shelf life of fresh and minimally processed fruits and vegetables. Their beneficial effects are due to their higher solubility and diffusivity in water, making them more effective in displacing O 2 from cellular sites and enzymatic O 2 receptors. [ 10 ]
There has been a debate regarding the use of carbon monoxide (CO) in the packaging of red meat due to its possible toxic effect on packaging workers. [ 9 ] Its use results in a more stable red color of carboxymyoglobin in meat, which leads to another concern that it can mask evidence of spoilage in the product. [ 5 ] [ 9 ]
Low O 2 and high CO 2 concentrations in packages are effective in limiting the growth of Gram negative bacteria , molds and aerobic microorganisms, such as Pseudomonas spp. High O 2 combined with high CO 2 could have bacteriostatic and bactericidal effects by suppressing aerobes with high CO 2 and anaerobes with high O 2 . [ 10 ] CO 2 has the ability to penetrate the bacterial membrane and affect intracellular pH . Therefore, the lag phase and generation time of spoilage microorganisms are increased, resulting in shelf life extension of refrigerated foods. [ 9 ] Since the growth of spoilage microorganisms is suppressed by MAP, the ability of pathogens to grow is potentially increased. Microorganisms that can survive in low-oxygen environments, such as Campylobacter jejuni , Clostridium botulinum , E. coli , Salmonella , Listeria and Aeromonas hydrophila , are of major concern for MA packaged products. [ 7 ] Products may appear organoleptically acceptable due to the delayed growth of spoilage microorganisms but might contain harmful pathogens. [ 7 ] This risk can be minimized by use of additional hurdles such as temperature control (maintaining temperature below 3 °C), lowering water activity (less than 0.92), reducing pH (below 4.5) or adding preservatives such as nitrite to delay metabolic activity and growth of pathogens. [ 8 ]
Flexible films are commonly used for products such as fresh produce, meats, fish and bread, as they provide suitable permeability for gases and water vapor to reach the desired atmosphere. Pre-formed trays are formed and sent to the food packaging facility where they are filled. The package headspace then undergoes modification and sealing. Pre-formed trays are usually more flexible and allow for a broader range of sizes than thermoformed packaging materials, as different tray sizes and colors can be handled without the risk of damaging the package. [ 11 ] Thermoformed packaging, however, is received in the food packaging facility as a roll of sheets. Each sheet is subjected to heat and pressure, and is formed at the packaging station. Following the forming, the package is filled with the product and then sealed. [ 12 ] The advantages that thermoformed packaging materials have over pre-formed trays are mainly cost-related: thermoformed packaging uses 30% to 50% less material, and the material is transported as rolls. This amounts to a significant reduction in manufacturing and transportation costs. [ 11 ]
When selecting packaging films for MAP of fruits and vegetables the main characteristics to consider are gas permeability, water vapor transmission rate, mechanical properties, transparency, type of package and sealing reliability. [ 6 ] Traditionally used packaging films like LDPE (low-density polyethylene), PVC (polyvinyl chloride), EVA (ethylene-vinyl acetate) and OPP (oriented polypropylene ) are not permeable enough for highly respiring products like fresh-cut produce, mushrooms and broccoli. As fruits and vegetables are respiring products, there is a need to transmit gases through the film. Films designed with these properties are called permeable films . Other films, called barrier films, are designed to prevent the exchange of gases and are mainly used with non-respiring products like meat and fish.
MAP films developed to control the humidity level as well as the gas composition in the sealed package are beneficial for the prolonged storage of fresh fruits, vegetables and herbs that are sensitive to moisture. These films are commonly referred to as modified atmosphere/modified humidity packaging (MA/MH) films.
Form-fill-seal packaging machines place the product in a flexible pouch suitable for the desired characteristics of the final product. These pouches can either be pre-formed or thermoformed. The food is introduced into the pouch, the composition of the headspace atmosphere is changed within the package, and it is then heat sealed. [ 11 ] These types of machines, which form, fill and seal the product either horizontally or vertically, are typically called pillow-wrap machines. [ 5 ] Form-fill-seal packaging machines are usually used for large-scale operations.
In contrast, chamber machines are used for batch processes. A pre-formed wrap is filled with the product and introduced into a cavity. The cavity is closed, a vacuum is then pulled on the chamber and the modified atmosphere is inserted as desired. Sealing of the package is done with heated sealing bars, and the product is then removed. This batch process is labor-intensive and thus requires a longer period of time; however, it is relatively cheaper than automated packaging machines. [ 11 ]
Additionally, snorkel machines are used to modify the atmosphere within a package after the food has been filled. The product is placed in the packaging material and positioned into the machine without the need of a chamber. A nozzle, which is the snorkel, is then inserted into the packaging material. It pulls a vacuum and then flushes the modified atmosphere into the package. The nozzle is removed and the package is heat sealed. This method is suitable for bulk and large operations. [ 11 ]
Many products such as red meat, seafood, minimally processed fruits and vegetables, salads, pasta, cheese, bakery goods, poultry, cooked and cured meats, ready meals and dried foods are packaged under MA. [ 4 ] A summary of optimal gas mixtures for MA products is shown in the following table.
Modified Atmosphere Packaging for different food products and optimal gas mixtures [ 2 ]
Modified atmosphere may be used to store grains.
CO 2 prevents insects and, depending on concentration, mold and oxidation from damaging the grain. Grain stored in this way can remain edible for approximately five years. [ 13 ] One method is placing a block of dry ice in the bottom of the container and filling it with the grain. Another method is purging the container from the bottom with gaseous carbon dioxide from a cylinder or bulk supply vessel.
Nitrogen gas ( N 2 ) at concentrations of 98% or higher is also used effectively to kill insects in the grain through hypoxia . [ 14 ] Carbon dioxide has an advantage in this respect, as it kills organisms through hypercarbia as well as hypoxia (depending on concentration), although it requires concentrations of roughly 35% or more. [ 15 ] This makes carbon dioxide preferable for fumigation in situations where a hermetic seal cannot be maintained.
Air-tight storage of grains (sometimes called hermetic storage) relies on the respiration of grain, insects, and fungi that can modify the enclosed atmosphere sufficiently to control insect pests. This is a method of great antiquity, [ 16 ] as well as having modern equivalents. The success of the method relies on having the correct mix of sealing, grain moisture, and temperature. [ 17 ]
A patented process uses fuel cells to exhaust and automatically maintain the exhaustion of oxygen in a shipping container, containing, for example, fresh fish. [ 18 ] | https://en.wikipedia.org/wiki/Modified_atmosphere |
The modified compression field theory (MCFT) is a general model for the load-deformation behaviour of two-dimensional cracked reinforced concrete subjected to shear. It models concrete considering concrete stresses in principal directions summed with reinforcing stresses assumed to be only axial. The concrete stress-strain behaviour was derived originally from Vecchio's tests and has since been confirmed with about 250 experiments performed on two large special purpose testing machines at the University of Toronto . Similar machines have been built in Japan and the United States, providing additional confirmation of the quality of the method's predictions.
The most important assumption in the MCFT model is that the cracked concrete in reinforced concrete can be treated as a new material with empirically defined stress–strain behaviour. This behaviour can differ from the traditional stress–strain curve of a cylinder, for example. The strains used for these stress–strain relationships are average strains, that is, they lump together the combined effects of local strains at cracks, strains between cracks, bond-slip, and crack slip. The calculated stresses are also average stresses in that they implicitly include stresses between cracks, stresses at cracks, interface shear on cracks, and dowel action. For the use of these average stresses and strains to be a reasonable assumption, the distances used in determining the average behaviour must include a few cracks.
Frank J. Vecchio defined the original form of the MCFT in 1982 from the testing of 30 reinforced concrete panels subjected to uniform strain states in a specially built tester. The theory of the MCFT traces back through the compression field theory of 1978 to the diagonal compression field theory of 1974. The definitive description of the MCFT is in the 1986 American Concrete Institute paper "The Modified Compression Field Theory for Reinforced Concrete Elements Subjected to Shear". | https://en.wikipedia.org/wiki/Modified_compression_field_theory |
The modified discrete cosine transform ( MDCT ) is a transform based on the type-IV discrete cosine transform (DCT-IV), with the additional property of being lapped : it is designed to be performed on consecutive blocks of a larger dataset , where subsequent blocks are overlapped so that the last half of one block coincides with the first half of the next block. This overlapping, in addition to the energy-compaction qualities of the DCT, makes the MDCT especially attractive for signal compression applications, since it helps to avoid artifacts stemming from the block boundaries. As a result of these advantages, the MDCT is the most widely used lossy compression technique in audio data compression . It is employed in most modern audio coding standards , including MP3 , Dolby Digital (AC-3), Vorbis (Ogg), Windows Media Audio (WMA), ATRAC , Cook , Advanced Audio Coding (AAC), [ 1 ] High-Definition Coding (HDC), [ 2 ] LDAC , Dolby AC-4 , [ 3 ] and MPEG-H 3D Audio , [ 4 ] as well as speech coding standards such as AAC-LD (LD-MDCT), [ 5 ] G.722.1 , [ 6 ] G.729.1 , [ 7 ] CELT , [ 8 ] and Opus . [ 9 ] [ 10 ]
The discrete cosine transform (DCT) was first proposed by Nasir Ahmed in 1972, [ 11 ] and demonstrated by Ahmed with T. Natarajan and K. R. Rao in 1974. [ 12 ] The MDCT was later proposed by John P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987, [ 13 ] following earlier work by Princen and Bradley (1986) [ 14 ] to develop the MDCT's underlying principle of time-domain aliasing cancellation (TDAC), described below. (There also exists an analogous transform, the MDST, based on the discrete sine transform , as well as other, rarely used, forms of the MDCT based on different types of DCT or DCT/DST combinations.)
In MP3, the MDCT is not applied to the audio signal directly, but rather to the output of a 32-band polyphase quadrature filter (PQF) bank. The output of this MDCT is postprocessed by an alias reduction formula to reduce the typical aliasing of the PQF filter bank. Such a combination of a filter bank with an MDCT is called a hybrid filter bank or a subband MDCT. AAC, on the other hand, normally uses a pure MDCT; only the (rarely used) MPEG-4 AAC-SSR variant (by Sony ) uses a four-band PQF bank followed by an MDCT. Similar to MP3, ATRAC uses stacked quadrature mirror filters (QMF) followed by an MDCT.
As a lapped transform, the MDCT is somewhat unusual compared to other Fourier-related transforms in that it has half as many outputs as inputs (instead of the same number). In particular, it is a linear function F : R^{2N} → R^{N} (where R denotes the set of real numbers ). The 2N real numbers x_0 , ..., x_{2N−1} are transformed into the N real numbers X_0 , ..., X_{N−1} according to the formula
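X_k = \sum_{n=0}^{2N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2} + \frac{N}{2}\right)\left(k + \frac{1}{2}\right)\right], \qquad k = 0, \ldots, N-1.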
The normalization coefficient in front of this transform, here unity, is an arbitrary convention and differs between treatments. Only the product of the normalizations of the MDCT and the IMDCT, below, is constrained.
The inverse MDCT is known as the IMDCT . Because there are different numbers of inputs and outputs, at first glance it might seem that the MDCT should not be invertible. However, perfect invertibility is achieved by adding the overlapped IMDCTs of subsequent overlapping blocks, causing the errors to cancel and the original data to be retrieved; this technique is known as time-domain aliasing cancellation ( TDAC ).
The IMDCT transforms N real numbers X 0 , ..., X N −1 into 2 N real numbers y 0 , ..., y 2 N −1 according to the formula
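y_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2} + \frac{N}{2}\right)\left(k + \frac{1}{2}\right)\right], \qquad n = 0, \ldots, 2N-1.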
As with the DCT-IV , an orthogonal transform, the inverse has the same form as the forward transform.
In the case of a windowed MDCT with the usual window normalization (see below), the normalization coefficient in front of the IMDCT should be multiplied by 2 (i.e., becoming 2/ N ).
Although the direct application of the MDCT formula would require O( N 2 ) operations, it is possible to compute the same thing with only O( N log N ) complexity by recursively factorizing the computation, as in the fast Fourier transform (FFT). One can also compute MDCTs via other transforms, typically a DFT (FFT) or a DCT, combined with O( N ) pre- and post-processing steps. Also, as described below, any algorithm for the DCT-IV immediately provides a method to compute the MDCT and IMDCT of even size.
In typical signal-compression applications, the transform properties are further improved by using a window function w n ( n = 0, ..., 2 N − 1) that is multiplied with x n in the MDCT and with y n in the IMDCT formulas above, in order to avoid discontinuities at the n = 0 and 2 N boundaries by making the function go smoothly to zero at those points. (That is, the window function is applied to the data before the MDCT or after the IMDCT.) In principle, x and y could have different window functions, and the window function could also change from one block to the next (especially for the case where data blocks of different sizes are combined), but for simplicity we consider the common case of identical window functions for equal-sized blocks.
The transform remains invertible (that is, TDAC works), for a symmetric window w n = w 2 N −1− n , as long as w satisfies the Princen–Bradley condition:
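w_n^2 + w_{n+N}^2 = 1, \qquad n = 0, \ldots, N-1.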
Various window functions are used. A window that produces a form known as a modulated lapped transform (MLT) [ 15 ] [ 16 ] is given by
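w_n = \sin\left[\frac{\pi}{2N}\left(n + \frac{1}{2}\right)\right]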
and is used for MP3 and MPEG-2 AAC, and
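w_n = \sin\left(\frac{\pi}{2} \sin^2\left[\frac{\pi}{2N}\left(n + \frac{1}{2}\right)\right]\right)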
for Vorbis. AC-3 uses a Kaiser–Bessel derived (KBD) window , and MPEG-4 AAC can also use a KBD window.
Note that windows applied to the MDCT are different from windows used for some other types of signal analysis, since they must fulfill the Princen–Bradley condition. One of the reasons for this difference is that MDCT windows are applied twice, for both the MDCT (analysis) and the IMDCT (synthesis).
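The definitions above can be checked numerically. The following is a minimal NumPy sketch (an illustration of the formulas in this article, not code from any particular codec), using the direct O(N²) transforms, the 2/N windowed-IMDCT normalization, and the sine (MLT) window; the final assertion verifies TDAC reconstruction:

```python
import numpy as np

def mdct(x):
    """Direct O(N^2) MDCT of a length-2N block (unit normalization)."""
    N = len(x) // 2
    n, k = np.arange(2 * N)[:, None], np.arange(N)[None, :]
    return x @ np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))

def imdct(X):
    """Direct IMDCT with the 2/N normalization used for windowed blocks."""
    N = len(X)
    n, k = np.arange(2 * N)[:, None], np.arange(N)[None, :]
    return (2.0 / N) * np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5)) @ X

N = 8
w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))  # sine (MLT) window
assert np.allclose(w[:N] ** 2 + w[N:] ** 2, 1.0)        # Princen-Bradley condition

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, N))
y1 = w * imdct(mdct(w * np.concatenate([A, B])))  # analysis/synthesis of (A, B)
y2 = w * imdct(mdct(w * np.concatenate([B, C])))  # 50%-overlapped block (B, C)
assert np.allclose(y1[N:] + y2[:N], B)            # overlap-add recovers B exactly
```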
As can be seen by inspection of the definitions, for even N the MDCT is essentially equivalent to a DCT-IV, where the input is shifted by N /2 and two N -blocks of data are transformed at once. By examining this equivalence more carefully, important properties like TDAC can be easily derived.
In order to define the precise relationship to the DCT-IV, one must realize that the DCT-IV corresponds to alternating even/odd boundary conditions: even at its left boundary (around n = −1/2), odd at its right boundary (around n = N − 1/2), and so on (instead of periodic boundaries as for a DFT ). This follows from the identities
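\cos\left[\frac{\pi}{N}\left(-n - 1 + \frac{1}{2}\right)\left(k + \frac{1}{2}\right)\right] = \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right)\left(k + \frac{1}{2}\right)\right]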
and
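\cos\left[\frac{\pi}{N}\left(2N - n - 1 + \frac{1}{2}\right)\left(k + \frac{1}{2}\right)\right] = -\cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right)\left(k + \frac{1}{2}\right)\right]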
Thus, if its inputs are an array x of length N , we can imagine extending this array to ( x , − x R , − x , x R , ...) and so on, where x R denotes x in reverse order.
Consider an MDCT with 2 N inputs and N outputs, where we divide the inputs into four blocks ( a , b , c , d ) each of size N /2. If we shift these to the right by N /2 (from the + N /2 term in the MDCT definition), then ( b , c , d ) extend past the end of the N DCT-IV inputs, so we must "fold" them back according to the boundary conditions described above.
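The result is that the MDCT of the 2N inputs (a, b, c, d) is exactly equivalent to a DCT-IV of the N folded inputs: MDCT(a, b, c, d) = DCT-IV(−c_R − d, a − b_R), where x_R denotes x in reverse order as above.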
In this way, any algorithm to compute the DCT-IV can be trivially applied to the MDCT.
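As a sketch of this reduction (assuming SciPy's scipy.fft.dct , whose type-4 DCT returns twice the unit-coefficient DCT-IV used in this article, hence the factor of 1/2):

```python
import numpy as np
from scipy.fft import dct

def mdct_via_dct4(x):
    """MDCT of a length-2N block via the fold (-c_R - d, a - b_R) followed by
    an N-point DCT-IV. Requires N even so x splits into four equal parts."""
    a, b, c, d = np.split(x, 4)
    u = np.concatenate([-c[::-1] - d, a - b[::-1]])
    return 0.5 * dct(u, type=4)  # SciPy's DCT-IV carries an extra factor of 2

def mdct_direct(x):
    """Reference O(N^2) evaluation of the MDCT definition."""
    N = len(x) // 2
    n, k = np.arange(2 * N)[:, None], np.arange(N)[None, :]
    return x @ np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))

x = np.random.default_rng(1).standard_normal(16)
assert np.allclose(mdct_via_dct4(x), mdct_direct(x))
```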
Similarly, the IMDCT formula above is precisely 1/2 of the DCT-IV (which is its own inverse), where the output is extended (via the boundary conditions) to a length 2 N and shifted back to the left by N /2. The inverse DCT-IV would simply give back the inputs (− c R − d , a − b R ) from above. When this is extended via the boundary conditions and shifted, one obtains
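IMDCT(MDCT(a, b, c, d)) = (a − b_R, b − a_R, c + d_R, d + c_R) / 2.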
Half of the IMDCT outputs are thus redundant, as b − a R = −( a − b R ) R , and likewise for the last two terms. If we group the input into bigger blocks A , B of size N , where A = ( a , b ) and B = ( c , d ), we can write this result in a simpler way:
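IMDCT(MDCT(A, B)) = (A − A_R, B + B_R) / 2.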
One can now understand how TDAC works. Suppose that one computes the MDCT of the subsequent, 50% overlapped, 2 N block ( B , C ). The IMDCT will then yield, analogous to the above: ( B − B R , C + C R )/2. When this is added with the previous IMDCT result in the overlapping half, the reversed terms cancel and one obtains simply B , recovering the original data.
The origin of the term "time-domain aliasing cancellation" is now clear. The use of input data that extend beyond the boundaries of the logical DCT-IV causes the data to be aliased in the same way that frequencies beyond the Nyquist frequency are aliased to lower frequencies, except that this aliasing occurs in the time domain instead of the frequency domain: we cannot distinguish the contributions of a and of b_R to the MDCT of (a, b, c, d), or equivalently, to the result of IMDCT(MDCT(a, b, c, d)) = (a − b_R, b − a_R, c + d_R, d + c_R) / 2.
The combinations c − d R and so on have precisely the right signs for the combinations to cancel when they are added.
For odd N (which are rarely used in practice), N /2 is not an integer, so the MDCT is not simply a shift permutation of a DCT-IV. In this case, the additional shift by half a sample means that the MDCT/IMDCT becomes equivalent to the DCT-III/II, and the analysis is analogous to the above.
We have seen above that the MDCT of 2 N inputs ( a , b , c , d ) is equivalent to a DCT-IV of the N inputs (− c R − d , a − b R ).
The DCT-IV is designed for the case where the function at the right boundary is odd, and therefore the values near the right boundary are close to 0. If the input signal is smooth, this is the case: the rightmost components of a and b R are consecutive in the input sequence ( a , b , c , d ), and therefore their difference is small.
Let us look at the middle of the interval: if we rewrite the above expression as (− c R − d , a − b R ) = (− d , a ) − ( b , c ) R , the second term, ( b , c ) R , gives a smooth transition in the middle.
However, in the first term, (− d , a ), there is a potential discontinuity where the right end of − d meets the left end of a .
This is the reason for using a window function that reduces the components near the boundaries of the input sequence ( a , b , c , d ) towards 0.
Above, the TDAC property was proved for the ordinary MDCT, showing that adding IMDCTs of subsequent blocks in their overlapping half recovers the original data. The derivation of this inverse property for the windowed MDCT is only slightly more complicated.
Consider two overlapping consecutive sets of 2 N inputs ( A , B ) and ( B , C ), for blocks A , B , C of size N .
Recall from above that when (A, B) and (B, C) are MDCTed, IMDCTed, and added in their overlapping half, we obtain (B + B_R)/2 + (B − B_R)/2 = B, the original data.
Now we suppose that we multiply both the MDCT inputs and the IMDCT outputs by a window function of length 2N. As above, we assume a symmetric window function, which is therefore of the form (W, W_R), where W is a length-N vector and R denotes reversal as before. Then the Princen–Bradley condition can be written as W^2 + W_R^2 = (1, 1, ...), with the squares and additions performed elementwise.
Therefore, instead of MDCTing (A, B), we now MDCT (WA, W_R B) (with all multiplications performed elementwise). When this is IMDCTed and multiplied again (elementwise) by the window function, the last-N half becomes:
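W_R (W_R B + W B_R) = W_R^2 B + W W_R B_R .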
(Note that we no longer have the multiplication by 1/2, because the IMDCT normalization differs by a factor of 2 in the windowed case.)
Similarly, the windowed MDCT and IMDCT of (B, C) yields, in its first-N half:
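W (W B − W_R B_R) = W^2 B − W W_R B_R .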
When we add these two halves together, we obtain:
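(W_R^2 B + W W_R B_R) + (W^2 B − W W_R B_R) = (W_R^2 + W^2) B = B by the Princen–Bradley condition,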
recovering the original data. | https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform |
Some systems in fluid dynamics involve a fluid being subject to conservative body forces . Since a conservative body force is the gradient of some potential function, it has the same effect as a gradient in fluid pressure . [ 1 ] It is often convenient to define a modified pressure equal to the true fluid pressure plus the potential.
Examples of conservative body forces include gravity and the centrifugal force in a rotating reference frame.
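For example, for a fluid of constant density ρ in a uniform gravitational field (with z measured vertically upward), the gravitational body force per unit volume is −∇(ρgz), so the modified pressure can be taken as P = p + ρgz, where p is the true fluid pressure.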
| https://en.wikipedia.org/wiki/Modified_pressure |
The modified triadan system is a scheme of dental nomenclature that can be used widely across different animal species . It is used worldwide among veterinary surgeons .
Each tooth is given a three digit number.
The first number relates to the quadrant of the mouth in which the tooth lies:
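For permanent teeth, quadrant 1 is the animal's upper right (right maxillary) quadrant, 2 the upper left, 3 the lower left and 4 the lower right (right mandibular).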
If it is a deciduous tooth that is being referred to, then a different number is used:
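Deciduous quadrants are numbered 5 to 8 in the same order, so 5 denotes the deciduous upper right quadrant, 6 the upper left, 7 the lower left and 8 the lower right.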
The second and third numbers refer to the location of the tooth from front to back (or rostral to caudal ). This starts at 01 and goes up to 11 for many species, depending on the total number of teeth. | https://en.wikipedia.org/wiki/Modified_triadan_system |
The Modigliani–Miller theorem (of Franco Modigliani , Merton Miller ) is an influential element of economic theory ; it forms the basis for modern thinking on capital structure . [ 1 ] [ 2 ] [ 3 ] The basic theorem states that in the absence of taxes , bankruptcy costs, agency costs , and asymmetric information , and in an efficient market , the enterprise value of a firm is unaffected by how that firm is financed. [ 4 ] [ unreliable source? ] This is not to be confused with the value of the equity of the firm. Since the value of the firm depends neither on its dividend policy nor its decision to raise capital by issuing shares or selling debt , the Modigliani–Miller theorem is often called the capital structure irrelevance principle .
The key Modigliani–Miller theorem was developed for a world without taxes. However, if we move to a world where there are taxes, when the interest on debt is tax-deductible , and ignoring other frictions, the value of the company increases in proportion to the amount of debt used. [ 5 ] The additional value equals the total discounted value of future taxes saved by issuing debt instead of equity.
Modigliani was awarded the 1985 Nobel Prize in Economics for this and other contributions.
Miller was a professor at the University of Chicago when he was awarded the 1990 Nobel Prize in Economics, along with Harry Markowitz and William F. Sharpe , for their "work in the theory of financial economics", with Miller specifically cited for "fundamental contributions to the theory of corporate finance".
Miller and Modigliani derived and published their theorem when they were both professors at the Graduate School of Industrial Administration (GSIA) of Carnegie Mellon University . Despite limited prior experience in corporate finance, Miller and Modigliani were assigned to teach the subject to current business students. Finding the published material on the topic lacking, the professors created the theorem based on their own research. [ 6 ] The result of this was the article in the American Economic Review and what has later been known as the M&M theorem.
Miller and Modigliani published a number of follow-up papers discussing some of these issues. The theorem was first proposed by F. Modigliani and M. Miller in 1958.
Consider two firms which are identical except for their financial structures. The first (Firm U) is unlevered : that is, it is financed by equity only. The other (Firm L) is levered: it is financed partly by equity, and partly by debt. The Modigliani–Miller theorem states that the enterprise value of the two firms is the same. Enterprise value encompasses claims by both creditors and shareholders, and is not to be confused with the value of the equity of the firm.
The operational justification of the theorem can be visualized using the working of arbitrage . Consider two firms operating in a perfect capital market that are identical in all respects except that one employs debt in its capital structure while the other does not. Investors in the firm with the higher overall value can sell their stake and buy a stake in the firm whose value is lower. They will be able to earn the same return at a lower capital outlay and, hence, lower perceived risk. Due to arbitrage, the excess selling of stakes in the higher-value firm brings its price down, while the increased buying raises the price of the stake in the lower-value firm. This corrects the market distortion created by the unequal amounts of risk, and ultimately the values of the two firms are leveled.
According to the MM Hypothesis, the value of levered firm can never be higher than that of the unlevered firm. The two must be equal. There is neither an advantage nor a disadvantage in using debt in a firm's capital structure.
V_U = V_L
where
V_U is the value of an unlevered firm = the price of buying a firm composed only of equity, and V_L is the value of a levered firm = the price of buying a firm that is composed of some mix of debt and equity. Another word for levered is geared , which has the same meaning. [ 7 ]
To see why this should be true, suppose an investor is considering buying one of the two firms, U or L. Instead of purchasing the shares of the levered firm L, he could purchase the shares of firm U and borrow the same amount of money B that firm L does. The eventual returns to either of these investments would be the same. Therefore the price of L must be the same as the price of U minus the money borrowed B, which is the value of L's debt.
This discussion also clarifies the role of some of the theorem's assumptions. We have implicitly assumed that the investor 's cost of borrowing money is the same as that of the firm, which need not be true in the presence of asymmetric information, in the absence of efficient markets, or if the investor has a different risk profile than the firm.
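The second proposition states that the cost of equity rises with leverage: r_E = r_0 + (D/E)(r_0 − r_D),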
where
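r_E is the required rate of return on equity (the cost of levered equity), r_0 is the cost of capital for an all-equity (unlevered) firm, r_D is the required rate of return on borrowings (the cost of debt), and D/E is the debt-to-equity ratio.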
A higher debt-to-equity ratio leads to a higher required return on equity, because of the higher risk involved for equity-holders in a company with debt. The formula is derived from the theory of weighted average cost of capital (WACC).
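As a small numeric illustration (the figures below are hypothetical, chosen only to show the mechanics of Proposition II without taxes):

```python
# Hypothetical inputs: r0 is the return on the unlevered firm's assets,
# rD the cost of debt, D and E the market values of debt and equity.
r0, rD = 0.10, 0.05
D, E = 40.0, 60.0

rE = r0 + (D / E) * (r0 - rD)  # Proposition II: cost of levered equity
wacc = (E / (D + E)) * rE + (D / (D + E)) * rD

print(f"required return on equity: {rE:.4f}")  # 0.1333
assert abs(wacc - r0) < 1e-12  # leverage leaves the WACC (and firm value) unchanged
```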
These propositions are true under the following assumptions:
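The key assumptions are that no taxes exist, no transaction or bankruptcy costs exist, information is symmetric, and individuals and corporations borrow at the same rates.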
These results might seem irrelevant (after all, none of the conditions are met in the real world), but the theorem is still taught and studied because it tells something very important. That is, capital structure matters precisely because one or more of these assumptions is violated. It tells where to look for determinants of optimal capital structure and how those factors might affect optimal capital structure.
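In a world with corporate taxes, the first proposition becomes: V_L = V_U + T_C D,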
where
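V_L is the value of a levered firm, V_U is the value of an unlevered firm, and T_C D is the corporate tax rate ( T_C ) multiplied by the value of debt ( D ); the term T_C D is the present value of the tax shield, assuming the debt is perpetual.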
This means that there are advantages for firms to be levered, since corporations can deduct interest payments. Therefore leverage lowers tax payments. Dividend payments are non-deductible.
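The second proposition with taxes becomes: r_E = r_0 + (D/E)(1 − T_C)(r_0 − r_D),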
where:
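r_E is the required rate of return on equity (the cost of levered equity), r_0 is the cost of capital for an all-equity firm, r_D is the required rate of return on borrowings (the cost of debt), D/E is the debt-to-equity ratio, and T_C is the tax rate.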
The same relationship as earlier described, stating that the cost of equity rises with leverage because the risk to equity rises, still holds. The formula, however, has implications for the difference with the WACC . Their second treatment of capital structure, which included taxes, identified that as the level of gearing increases by replacing equity with cheap debt, the level of the WACC drops, and an optimal capital structure does indeed exist at the point where debt is 100%.
The following assumptions are made in the propositions with taxes: corporations are taxed at the rate T_C on earnings after interest, no transaction costs exist, and individuals and corporations borrow at the same rate. | https://en.wikipedia.org/wiki/Modigliani–Miller_theorem |
Modo (stylized in all lowercase) was a wireless device developed by Scout Electromedia . Utilizing pager networks, the device was designed to provide city-specific "lifestyle" content, such as reviews of restaurants or bars and movie listings, in addition to original curated content by Scout's developers. [ 1 ]
Officially announced on August 28, 2000, targeting a "young hipster " urban demographic [ 2 ] and a reported $20 million spent on marketing, [ 3 ] the Modo was released in September 2000 in two US cities, New York and San Francisco , with plans to roll out in other major urban areas such as Los Angeles and Chicago . [ 4 ]
After failing to receive additional funding [ 5 ] and the firing of one of its chief executives, Geoff Pitfield, [ 6 ] Scout Electromedia was liquidated and the Modo, along with its wireless service, was discontinued in October 2000, [ 7 ] just one month after its release and one day before its Los Angeles launch. It has been noted as one of the most notorious failed ventures of the dot-com bubble . [ 8 ] [ 9 ]
After the company was funded, one of its venture backers, Flatiron, backed a similar company, Vindigo, which aimed to bring a broader range of information to the PalmPilot platform. Because of Scout's focus on delivering mobile information to a young design-conscious audience that had no interest in using a traditional PDA, Vindigo was considered by the backers to be a complementary product offering. Scout Electromedia received an estimated $40 million to develop and market the Modo. [ 3 ]
The industrial design was done by IDEO (which took an investment in the startup), while the device software was based on Pixo 's operating system (the OS that later powered the Apple iPod ). All of the electrical engineering, wireless, and system development were done in-house by the company.
The Modo was advertised heavily in its target markets of Los Angeles, New York, Chicago and San Francisco, and was sold online via its website and in retailers such as DKNY and Virgin Megastores . The product was launched in the late summer of 2000 and made it to two of the four planned cities, but only shipped for one day in San Francisco. Although the stock sold out, reviews of the device were mixed: reviewers praised the device's design and concept but criticized its one-way service and limited city availability, and compared it unfavorably to competitors Vindigo and Palm . [ 10 ] [ 11 ] [ 12 ]
On October 20, 2000, Geoff Pitfield, Scout Electromedia's CEO, was fired, [ 6 ] and on October 24, 2000, the company was shut down, stopping all developments and service on the Modo. [ 13 ] Over time, it came out that the company's venture backers had left the company to die as many of them experienced their own financial problems due to the dot-com bubble (notably Idealab, Flatiron, and Chase Capital ). [ 14 ] | https://en.wikipedia.org/wiki/Modo_(wireless_device) |
Modons, or dipole eddy pairs, are eddies that can carry water over distances of more than 1,000 km in the ocean, in different directions from usual sea currents such as Rossby waves , and much faster than other eddies. [ 1 ]
The name modon was coined by M. E. Stern as a pun on the joint USA - USSR oceanographic research program POLYMODE. [ 2 ] The modon is a dipole-vortex solution to the potential-vorticity equation that was theorized in order to explain anomalous atmospheric blocking events and eddy structures in rotating fluids, [ 3 ] and the first solution was obtained by Stern in 1975. However, this solution was imperfect because it was not continuous at the modon boundary, so other scientists, such as Larichev and Reznik (1976), proposed other solutions that corrected that problem. [ 2 ]
Although modons were predicted theoretically in the 1970s, a pair of modons spinning in opposite directions was first identified traveling in 2017 over the Tasman Sea . The study of satellite images has allowed the identification of other modons, dating back at least to 1993, that had not been identified as such until then. The scientists who first discovered modons in the wild think that they can absorb small sea creatures and carry them at high speed over long ocean distances. [ 4 ] They are also capable of affecting the transport of heat, carbon and nutrients over that area of the ocean. They move about ten times faster than a typical eddy, and can last for six months before the pair disengages.
In 2019, Rostami and Zeitlin [ 5 ] reported the discovery of steady, long-living, slowly eastward-moving large-scale coherent twin cyclones, the so-called "equatorial modons," by means of a moist-convective rotating shallow water model. The crudest barotropic features of the Madden–Julian oscillation (MJO), such as eastward propagation along the equator, slow phase speed, a hydro-dynamically coherent structure and a convergent zone of moist convection, are captured by Rostami and Zeitlin's modon. An exact solution for the streamlines of the internal and external regions of the asymptotic equatorial modon is another feature of this structure. It is shown that such eastward-moving coherent dipolar structures can be produced during the geostrophic adjustment of localized large-scale pressure anomalies in a diabatic moist-convective environment on the equator. [ 6 ] | https://en.wikipedia.org/wiki/Modon_(fluid_dynamics) |
The Modular Chemical Descriptor Language ( MCDL ) is a method for representing molecular structures and pertinent molecular information using linear descriptors. MCDL files are designed for cross-platform transfer and manipulation of compound-specific chemical data. They consist of sets of unique information (fragments, connections) and nonunique information (coordinates, ID numbers, spectra, physical-chemical properties). The nonunique portion of the descriptor can be customized, thus providing end-user flexibility. [ 1 ] Unique representation of atom and double-bond stereochemistry is handled in separate modules. [ 2 ]
Modular Chemical Descriptor Language is currently implemented in several software packages. A JAVA-based MCDL editor with intelligent generation of 2D coordinates is available as open source software under GPL. [ 3 ] MCDL translator is also included in Open Babel starting from version 2.3.1. [ 4 ] [ 5 ]
| https://en.wikipedia.org/wiki/Modular_Chemical_Descriptor_Language |
A Modular Product Architecture is a product design practice, using principles of modularity . In short, a Modular Product Architecture can be defined as a collection of modules with unique functions and strategies, protected by interfaces to deliver an evolving family of market-driven products.
Karl Ulrich, Professor in Mechanical Engineering, defines a Product Architecture as “(1) the arrangement of functional elements; (2) the mapping from functional elements to physical components; (3) the specification of the interfaces among interacting physical components”. [ 1 ] A Modular Product Architecture consists of interchangeable building blocks (modules) that can be combined into a variety of product variants. [ 2 ]
Assigning strategic intent to each module enables the producing company to connect its business objectives with the design of the product. [ 3 ] [ 2 ] This can be done by the use of Module Drivers . [ 3 ] [ 2 ] The Module Drivers were first defined in 1998 by Gunnar Erixon, PhD in Design Engineering at KTH Royal Institute of Technology , and grouped into primary and secondary Module Drivers. [ 4 ] The primary drivers define the strategy of the module based on its need for development or variance. [ 3 ] [ 4 ]
Using standardized interfaces between the modules enables interchangeability of different variants of modules and ensures that the Modular Product Architecture can be maintained over time. [ 3 ] [ 5 ] This enables the producing company to continuously update and improve the Modular Product Architecture and respond to changing needs in the market.
Modular Product Architectures can be developed by the use of Modular function deployment . [ 4 ] | https://en.wikipedia.org/wiki/Modular_Product_Architecture |
Modular additions are usually side and second-story additions to homes that are prefabricated at off-site facilities. The general characteristics of a modular home apply. For a second-story modular addition, the existing house should have a sound structure, as modular rooms are 30% or more heavier than the equivalent stick-built construction. Modular additions are built in the facility, brought to the site and "dropped" by crane onto the new location. [ 1 ]
Even though it is often assumed that modular additions are less expensive than traditional stick-built ones, this is rarely true when compared by square footage. The buyer generally gets more wood, better insulation and the benefits of a controlled manufacturing environment, but the total cost of the contract is about the same. | https://en.wikipedia.org/wiki/Modular_addition |
Modular agile transit (MAT) is a conceptual framework for public transportation that integrates modular vehicle technology with agile operational strategies to enhance flexibility, efficiency, and responsiveness in urban and suburban transit systems. The term combines "modular," referring to vehicles composed of interchangeable units, and "agile," a principle borrowed from software development emphasizing adaptability and iterative improvement. While not yet a standardized system, MAT represents an emerging idea in transportation research to address challenges such as fluctuating demand, first- and last-mile connectivity, environmental sustainability, and the inefficient use of urban space historically dictated by traditional transit infrastructure.
MAT envisions a transit system where vehicles, often autonomous, consist of modular units or "pods" that can be dynamically assembled or disassembled to adjust capacity based on real-time passenger demand. This modularity allows smaller units to serve low-demand areas or times while larger configurations handle peak loads, reducing operational costs and improving service quality. The agile component emphasizes rapid adaptation to changing conditions—such as traffic patterns, urban events, or infrastructure issues—through data-driven decision-making and flexible routing. Unlike conventional transit systems that have shaped cities around fixed infrastructure like highways and parking lots, MAT aims to address transit's impact on urban design by reclaiming space for human-centric uses such as housing, parks, and commerce.
The concept builds on advancements in autonomous vehicle technology and modular design, as seen in research on autonomous modular buses (AMBs) and flexible transit systems . It aligns with broader trends in sustainable urban mobility, seeking to reduce reliance on private cars, lower greenhouse gas emissions , and adapt transit to evolving urban needs.
MAT systems typically incorporate the following elements:
Proponents of MAT suggest it could offer several advantages over traditional fixed-route transit:
Despite its potential, MAT faces several hurdles:
MAT draws from ongoing research into modular transit systems, with studies exploring optimization of vehicle formations and schedules using mathematical models like mixed-integer linear programming to balance operator costs and passenger needs. [ 1 ] Experiments with autonomous modular buses (AMBs) have demonstrated reduced travel times and transfer frequency in simulated urban networks. [ 4 ] Research also highlights MAT's potential to integrate with smart city frameworks, using real-time data to enhance urban management beyond mobility. [ 3 ] However, as of April 2025, no fully operational MAT system has been widely implemented, with most work remaining in the proof-of-concept stage.
MAT differs from traditional fixed-route buses by offering variable capacity and routing, unlike conventional transit's static schedules and vehicle sizes that often lock cities into outdated layouts. It contrasts with demand-responsive transport (e.g., ridesharing) by maintaining a structured, scalable network rather than a fully individualized service. Compared to transit-oriented development (TOD), which focuses on urban planning around transit hubs, MAT emphasizes operational adaptability and space reclamation within existing infrastructure, reducing the need for sprawling highways or parking zones that disrupt communities. [ 8 ]
As cities seek innovative solutions to congestion, pollution, and land-use inefficiency, MAT could play a role in future mobility ecosystems. Its integration with technologies like Mobility as a Service (MaaS) and advancements in battery efficiency may accelerate development. By reducing total vehicle miles traveled and optimizing shared mobility, MAT aligns with goals to lower urban emissions. [ 3 ] Researchers suggest pilot projects in mid-sized cities could test its viability, potentially leading to broader adoption by 2030 or beyond, with early adopters influencing urban innovation. | https://en.wikipedia.org/wiki/Modular_agile_transit |
A modular building is a prefabricated building that consists of repeated sections called modules. [ 1 ] Modularity involves constructing sections away from the building site, then delivering them to the intended site. Installation of the prefabricated sections is completed on site. Prefabricated sections are sometimes placed using a crane . The modules can be placed side-by-side, end-to-end, or stacked, allowing for a variety of configurations and styles. After placement, the modules are joined together using inter-module connections, also known as inter-connections. The inter-connections tie the individual modules together to form the overall building structure. [ 2 ]
Modular buildings may be used for long-term, temporary or permanent facilities, such as construction camps, schools and classrooms, civilian and military housing, and industrial facilities. Modular buildings are used in remote and rural areas where conventional construction may not be reasonable or possible, for example, the Halley VI accommodation pods used for a BAS Antarctic expedition. [ 3 ] Other uses have included churches, health care facilities, sales and retail offices, fast food restaurants and cruise ship construction. They can also be used in areas that have weather concerns, such as hurricanes. Modular buildings are often used to provide temporary facilities, including toilets and ablutions at events. The portability of the buildings makes them popular with hire companies and clients alike. The use of modular buildings enables events to be held at locations where existing facilities are unavailable, or unable to support the number of event attendees.
Construction is offsite, using lean manufacturing techniques to prefabricate single or multi-story buildings in deliverable module sections. Often, modules are based around standard 20 foot containers , using the same dimensions, structures, building and stacking/placing techniques, but with smooth (instead of corrugated) walls, glossy white paint, and provisions for windows, power, potable water, sewage lines, telecommunications and air conditioning. Permanent Modular Construction (PMC) buildings are manufactured in a controlled setting and can be constructed of wood, steel, or concrete. Modular components are typically constructed indoors on assembly lines . Modules' construction may take as little as ten days but more often one to three months. PMC modules can be integrated into site built projects or stand alone and can be delivered with MEP , fixtures and interior finishes.
The buildings are 60% to 90% completed offsite in a factory-controlled environment, and transported and assembled at the final building site. This can comprise the entire building or be components or subassemblies of larger structures. In many cases, modular contractors work with traditional general contractors to exploit the resources and advantages of each type of construction. Completed modules are transported to the building site and assembled by a crane. [ 4 ] Placement of the modules may take from several hours to several days. Off-site construction running in parallel to site preparation, providing a shorter time to project completion, is one of the common selling points of modular construction.
Permanent modular buildings are built to meet or exceed the same building codes and standards as site-built structures and the same architect-specified materials used in conventionally constructed buildings are used in modular construction projects. PMC can have as many stories as building codes allow. Unlike relocatable buildings, PMC structures are intended to remain in one location for the duration of their useful life.
The entire process of modular construction places significance on the design stage. This is where practices such as Design for Manufacture and Assembly (DfMA) are used to ensure that assembly tolerances are controlled throughout manufacture and assembly on site. It is vital that there is enough allowance in the design to allow the assembly to take up any "slack" or misalignment of components. The use of advanced CAD systems, 3D printing and manufacturing control systems are important for modular construction to be successful. This is quite unlike on-site construction where the tradesman can often make the part to suit any particular installation.
The development of factory facilities for modular homes requires significant upfront investment. To help address housing shortages in the 2010s, the United Kingdom Government (via Homes England ) invested in modular housing initiatives. Several UK companies (for example, Ilke Homes , L&G Modular Homes , House by Urban Splash , Modulous, TopHat and Lighthouse) were established to develop modular homes as an alternative to traditionally-built residences, but failed as they could not book revenues quickly enough to cover the costs of establishing manufacturing facilities.
Ilke Homes opened a factory in Knaresborough , Yorkshire in 2018; Homes England invested £30m in November 2019 [ 5 ] and a further £30m in September 2021. [ 6 ] Despite a further fund-raising round, raising £100m in December 2022, [ 7 ] [ 8 ] Ilke Homes went into administration on 30 June 2023, [ 9 ] [ 10 ] with most of the company's 1,150 staff made redundant, [ 11 ] and debts of £320m, [ 12 ] including £68m owed to Homes England. [ 13 ]
In 2015 Legal & General launched a modular homes operation, L&G Modular Homes, opening a 550,000 sq ft factory in Sherburn-in-Elmet , near Selby in Yorkshire. [ 14 ] The company incurred large losses as it invested in its factory before earning any revenues; by 2019, it had lost over £100m. [ 15 ] Sales revenues from a Selby project, plus schemes in Kent and West Sussex, started to flow in 2022, by which time the business's total losses had grown to £174m. [ 16 ] Production was halted in May 2023, with L&G blaming local planning delays and the COVID-19 pandemic for its failure to grow its sales pipeline. [ 17 ] [ 18 ] The enterprise incurred total losses over seven years of £295m. [ 19 ]
Some home buyers and some lending institutions resist consideration of modular homes as equivalent in value to site-built homes. [ citation needed ] While the homes themselves may be of equivalent quality, entrenched zoning regulations and psychological marketplace factors may create hurdles for buyers or builders of modular homes and should be considered as part of the decision-making process when exploring this type of home as a living and/or investment option. In the UK and Australia, modular homes have become accepted in some regional areas; however, they are not commonly built in major cities. Modular homes are becoming increasingly common in Japanese urban areas, due to improvements in design and quality, speed and compactness of onsite assembly, as well as due to lowering costs and ease of repair after earthquakes. Recent innovations allow modular buildings to be indistinguishable from site-built structures. [ 20 ] Surveys have shown that individuals can rarely tell the difference between a modular home and a site-built home. [ 21 ]
Differences include the building codes that govern the construction, types of material used and how they are appraised by banks for lending purposes. Modular homes are built to either local or state building codes as opposed to manufactured homes, which are also built in a factory but are governed by a federal building code. [ 22 ] The codes that govern the construction of modular homes are exactly the same codes that govern the construction of site-constructed homes. [ citation needed ] In the United States, all modular homes are constructed according to the International Building Code (IBC), IRC, BOCA or the code that has been adopted by the local jurisdiction. [ citation needed ] In some states, such as California, mobile homes must still be registered yearly, like vehicles or standard trailers, with the Department of Motor Vehicles or other state agency. This is true even if the owners remove the axles and place it on a permanent foundation. [ 23 ]
A mobile home should have a small metal tag on the outside of each section. If a tag cannot be located, details about the home can be found in the electrical panel box. This tag should also reveal a manufacturing date. [ citation needed ] Modular homes do not have metal tags on the outside but will have a dataplate installed inside the home, usually under the kitchen sink or in a closet. The dataplate will provide information such as the manufacturer, third party inspection agency, appliance information, and manufacture date.
The materials used in modular buildings are of the same quality and durability as those used in traditional construction, preserving characteristics such as acoustic insulation and energy efficiency , as well as allowing for attractive and innovative designs thanks to their versatility. [ 24 ] Most commonly used are steel, wood and concrete. [ 25 ]
Wood-frame floors, walls and roof are often utilized. Some modular homes include brick or stone exteriors, granite counters and steeply pitched roofs. Modulars can be designed to sit on a perimeter foundation or basement. In contrast, mobile homes are constructed with a steel chassis that is integral to the integrity of the floor system. Modular buildings can be custom built to a client's specifications. Current designs include multi-story units, multi-family units and entire apartment complexes. The negative stereotype commonly associated with mobile homes has prompted some manufacturers to start using the term "off-site construction."
New modular offerings include other construction methods such as cross-laminated timber frames. [ 27 ]
Mobile homes often require special lenders. [ 28 ]
Modular homes, on the other hand, are financed as site-built homes with a construction loan.
Typically, modular dwellings are built to local, state or council code, resulting in dwellings from a given manufacturing facility having differing construction standards depending on the final destination of the modules. [ 29 ] The most important zones that manufacturers have to take into consideration are local wind, heat, and snow load zones. [ citation needed ] For example, homes built for final assembly in a hurricane-prone, earthquake or flooding area may include additional bracing to meet local building codes. Steel and/or wood framing are common options for building a modular home.
Some US courts have ruled that zoning restrictions applicable to mobile homes do not apply to modular homes since modular homes are designed to have a permanent foundation. [ citation needed ] Additionally, in the US, valuation differences between modular homes and site-built homes are often negligible in real estate appraisal practice; modular homes can, in some market areas, (depending on local appraisal practices per Uniform Standards of Professional Appraisal Practice ) be evaluated the same way as site-built dwellings of similar quality. In Australia, manufactured home parks are governed by additional legislation that does not apply to permanent modular homes. Possible developments in equivalence between modular and site-built housing types for the purposes of real estate appraisals , financing and zoning may increase the sales of modular homes over time. [ 30 ]
The Consortium of Local Authorities Special Programme (abbreviated and more commonly referred to as CLASP) was formed in England in 1957 to combine the resources of local authorities with the purpose of developing a prefabricated school building programme. Initially developed by Charles Herbert Aslin, the county architect for Hertfordshire , the system was used as a model for several other counties, most notably Nottinghamshire and Derbyshire . CLASP's popularity in these coal mining areas was in part because the system permitted fairly straightforward replacement of subsidence-damaged sections of building.
Modular homes are designed to be stronger than traditional homes by, for example, replacing nails with screws, adding glue to joints, and using 8–10% more lumber than conventional housing. [ 31 ] This is to help the modules maintain their structural integrity as they are transported on trucks to the construction site. However, there are few studies on the response of modular buildings to transport and handling stresses. It is therefore presently difficult to predict transport induced damage. [ 1 ]
When FEMA studied the destruction wrought by Hurricane Andrew in Dade County Florida, they concluded that modular and masonry homes fared best compared to other construction. [ 32 ]
The CE mark is a construction norm that guarantees the user the mechanical resistance and strength of the structure. It is a label granted by authorities empowered by the European Community to certify end-to-end process control and traceability. [ citation needed ]
All manufacturing operations are being monitored and recorded:
This ID and all the details are recorded in a database . At any time, the producer has to be able to provide all the information from each step of the production of a single unit. The EC certification guarantees standards in terms of durability and resistance against wind and earthquakes. [ citation needed ]
The term modularity can be perceived in different ways. It can even be extended to building P2P (peer-to-peer) applications, where P2P technology is applied with the aid of a modular paradigm. Here, well-understood components with clean interfaces can be combined to implement arbitrarily complex functions, in the hope of further proliferating self-organising P2P technology. Open modular buildings are an excellent example of this. Modular building can also be open source and green.
Bauwens, Kostakis and Pazaitis [ 33 ] elaborate on this kind of modularity. They link modularity to the construction of houses.
This commons-based activity is geared towards modularity. The construction of modular buildings enables a community to share designs and tools related to all the different parts of house construction. It is a socially oriented endeavour that deals with the external architecture of buildings and the internal dynamics of the open source commons. People are thus provided with the tools to reconfigure the public sphere in the areas where they live, especially in urban environments. There is a robust socializing element that is reminiscent of pre-industrial vernacular architecture and community-based building. [ 34 ]
Some organisations already provide modular housing. Such organisations are relevant as they allow for the online sharing of construction plans and tools. These plans can then be realized through digital fabrication, such as 3D printing, or by sourcing low-cost materials from local communities. Given how easy these low-cost materials (for example, plywood) are to use, they can help open buildings spread to areas or communities that lack the know-how or capabilities of conventional architectural or construction firms. This allows for a fundamentally more standardised way of constructing houses and buildings. The overarching idea remains key: to allow easy access to user-friendly layouts which anyone can use to build in a more sustainable and affordable way.
Modularity in this sense is building a house from different standardised parts, like solving a jigsaw puzzle.
3D printing can be used to build the house.
The main standard is OpenStructures and its derivative Autarkytecture . [ 35 ]
Modular construction is the subject of continued research and development worldwide as the technology is applied to taller and taller buildings. Research and development is carried out by modular building companies and also research institutes such as the Modular Building Institute [ 36 ] and the Steel Construction Institute. [ 37 ]
34 - " Volumetric modular construction trend gaining groun d". https://www.aa.com.tr/en/corporate-news/volumetric-modular-construction-trend-gaining-ground/2357158 06.09.2021 | https://en.wikipedia.org/wiki/Modular_building |
Modular construction is a construction technique which involves the prefabrication of 2D panels or 3D volumetric structures in off-site factories and transportation to construction sites for assembly. This process has the potential to be superior to traditional building in terms of both time and costs, with claimed time savings of between 20 and 50 percent faster than traditional building techniques. [ 1 ]
It is estimated that by 2030, modular construction could deliver US$22 billion in annual cost savings for the US and European construction industry, helping fill the US$1.6 trillion productivity gap. [ 1 ] The current need for standardized, repeatable prefabricated 3D volumetric housing units and designs for student accommodation, affordable housing and hotels is driving demand for modular construction.
In a 2018 Practice Note , the NEC states that the benefits obtained from offsite construction mainly relate to the creation of components in a factory setting, protected from the weather and using manufacturing techniques such as assembly lines with dedicated and specialist equipment. [ 2 ] Through the use of appropriate technology, modular construction can:
In contrast to the benefits mentioned earlier, modular construction presents two significant obstacles: [ 3 ]
Modular construction has consistently been at least 20 percent faster than traditional on-site builds. [ citation needed ] Currently, the design process of modular construction projects tends to take longer than that of traditional building. This is because modular construction is a fairly new technology and few architects and engineers have experience working with it; in short, the industry has not yet learned how to work this way. Design firms are expected to develop module libraries that would assist in automating this process. These module libraries would hold various pre-designed 2D panels and 3D structures that could be digitally assembled to create standardized structures.
The foundations of a structure are a crucial part of its rigidity. Their magnitude and complexity vary with the size and overall weight of the structure. Because a prefabricated structure is lighter than a traditionally built house, the foundations it needs are smaller and faster to build.
Off-site manufacturing is where modular construction delivers its largest time savings. The ability to coordinate and repeat activities in a factory, along with increased automation, results in far faster manufacturing times than on-site building. A major time saver is the ability to work in parallel on the foundation of a structure and on the manufacturing of the structure itself, which is impossible with traditional construction. On-site construction is radically simplified: assembling the pre-fabricated components is as simple as placing the 3D modules and connecting the services to the main site connections. A team of five workers can assemble up to six 3D modules, or the equivalent of 270 square meters of finished floor area, in a single work day.
Since manufacturing the components of modular construction requires specialized technology, the prefabricated parts of modular buildings are produced by modular factories. To optimize time, modular factories consider the specifications and resources of the project and adapt a scheduling algorithm to fulfill the needs of that particular project. However, current scheduling methods assume the quantity of resources will never reach zero, and therefore represent an unrealistic work cycle.
A modular factory handling a single project at any given point is rare, and would produce low returns. Hyun and Lee's research proposes a Genetic Algorithm (GA) scheduling model which takes the characteristics of several projects into consideration and shares resources between them. [ 4 ] The production sequence of this algorithm is largely determined by which modules need to be transported to which site and the dates they should arrive. After considering the variables of production, transportation and on-site assembly, the objective function is to minimize the total stock held at the factory, $\min \sum_{i} S_{i}$, with the daily stock balance $S_{i} = S_{i-1} + P_{i} - E_{i}$, where $S_{i}$ is the number of stocked units on day $i$, $P_{i}$ is the number of units produced on day $i$, and $E_{i}$ is the number of units installed on day $i$. Production algorithms are continuously being developed to further accelerate the production of modular construction buildings, widening the time-saving gap over traditional construction methods.
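The stock-balance objective can be illustrated with a minimal sketch (this illustrates the objective function only, not Hyun and Lee's actual GA; the production and installation schedules below are hypothetical):

```python
# Sketch of the stock-balance objective used to score candidate
# production schedules. P[i] = units produced on day i,
# E[i] = units installed on-site on day i (hypothetical inputs).

def total_stock(P, E, initial_stock=0):
    """Sum of daily stocked units S_i, the quantity to minimize."""
    stock = initial_stock
    total = 0
    for produced, installed in zip(P, E):
        stock = stock + produced - installed  # S_i = S_{i-1} + P_i - E_i
        total += stock
    return total

# Producing early accumulates stock (higher objective) compared with
# producing just in time for each installation date.
print(total_stock(P=[6, 6, 0, 0], E=[0, 0, 6, 6]))  # 24
print(total_stock(P=[0, 6, 0, 6], E=[0, 6, 0, 6]))  # 0
```

A GA-based scheduler would search over production sequences like these, subject to factory capacity and delivery-date constraints, keeping the schedules with the lowest objective.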
Modular construction can yield savings of up to 20 percent of the total project cost. However, there is also a risk of it increasing the cost by 10 percent. [ citation needed ] This occurs when the savings in labor are outweighed by increased logistics and material costs. The pre-fabrication of components used in modular construction has a higher logistics cost than traditional building: since the panels or 3D structures have to be manufactured in a factory and transported to the construction site, new variables which alter the flow of construction are introduced.
Transportation of fabricated components is naturally more expensive than that of raw materials. For one, even 2D panels stacked together are much harder to transport than the raw cement, wood or other materials used to build them. Panels run a high risk of suffering minor or major damage when transported by land. If a panel were damaged, it would likely have to be replaced entirely: the factory would need to temporarily stop production of other panels to replace it, increasing the overall manufacturing hours and therefore cost. On top of the manufacturing hours, the transportation hours would also increase, adding yet another cost. Even so, the transportation of 2D panels is still a good alternative to on-site construction.
Transportation reaches its peak cost when shipping 3D volumetric structures. While 1 m² of 2D floor space costs approximately US$8 to transport 250 km, its equivalent in 3D floor space costs US$45. [ 1 ] Adding the replacement cost if the structure is damaged during transport creates a large cost increase.
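To put those figures in perspective: for the 270 square meters of finished floor area mentioned above, a 250 km shipment would cost roughly 270 × US$8 ≈ US$2,160 as 2D panels, versus 270 × US$45 ≈ US$12,150 as 3D modules.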
Assembling components in an off-site factory means that workers can exploit the repeatability of the structures, as well as automation, to facilitate the manufacturing process. By standardizing the overall design of structures, work which would usually require expensive workers with specific skills (e.g. mechanical, electrical and plumbing) can be completed by low-cost manufacturing labor, decreasing total salary costs. As very little manufacturing occurs on-site, up to 80% of traditional labor activity can be moved off-site to the module factory. This reduces the number of sub-contractors needed, further decreasing total salary costs. Overall, the larger the labor-intensive portion of a project, the larger the savings will be if modular construction is used.
Projects such as student accommodations, hotels and affordable housing are great candidates for modular construction: the repeatability of their structures leads to faster manufacturing times and therefore lower overall cost. Meanwhile, if the project is, for example, a modern beach house with highly irregular wall spaces and ceilings, traditional construction methods may be preferable. As the industry continues to adapt and grow, these repeatable designs could one day be modified and adapted to fit all kinds of structures at decreased cost.
Construction is considered one of the most dangerous industries: workers fall from heights, objects are dropped, muscles are strained and environmental hazards abound. Modular construction confines most manufacturing activities to a clean, ground-level space with fewer workers needed. It is estimated that reportable accidents are reduced by over 80% relative to site-intensive construction. [ 5 ] In a survey on safety management in the construction industry conducted by McGraw Hill Construction in 2013, 50% of respondents believed that pre-fabrication was safer than traditional on-site building, while only 4% said that prefabrication or modular construction had a negative impact on safety performance. Of the general and specialty contractors surveyed, 78% and 59% respectively said that the largest safety impact came from performing complex tasks at ground level. [ 6 ] According to the CDC, falling is the leading cause of work-related fatalities in construction, accounting for more than one in every three deaths in the industry. [ 7 ] Reducing the heights at which workers need to perform tasks reduces the fatality risk they face, greatly increasing the overall safety of the industry. Also, 69% of the general contractors as well as 69% of the specialty contractors said that the reduced number of workers performing different tasks at the off-site factory also improved construction site safety. Overall, modular construction is safer for the following reasons:
Modular construction is still not considered an entirely safe alternative; however, it does reduce accidents and fatalities by a significant amount, especially during the manufacturing phase of a project. While 48.1% of all accidents during on-site construction were fall-related, only 9.1% of the accidents at manufacturing plants were due to falls. [ 6 ] Manufacturing plant workers were more likely to be struck by an object or equipment (37.1%), and fractures and amputations occurred with the same injury-type frequency of 27.3%. Nevertheless, as the construction industry continues to adapt and moves to more sustainable construction methods like pre-fabricated modular construction, the overall number of accidents at construction sites is expected to decrease.
The use of modular construction methods is encouraged by proponents of Prevention through Design techniques in construction. It is included as a recommended hazard control for construction projects in the "PtD - Architectural Design and Construction Education Module" published by the National Institute for Occupational Safety and Health . [ 7 ]
Modular construction is a great alternative to traditional construction in terms of the amount of waste each method produces. When a high-rise building was constructed in Wolverhampton from 824 modules, about 5% of the total weight of the construction was produced as waste, compared with the 10–13% average waste of traditional methods. [ 5 ] This difference may not seem like much for small structures; however, for a 100,000 lb/ft² building it is a significant percentage. The number of on-site deliveries also decreased by up to 70%, [ 5 ] with deliveries instead moved to the modular factory, where more material can be received. On-site noise pollution is greatly reduced as well: by moving the manufacturing process to an off-site factory, usually located outside the city, neighboring buildings are not impacted as they would be by the traditional building process.
Open-source and commercial hardware components used in modular construction include: open beams, bit beams, maker beams, grid beams, contraptors, OpenStructures components, etc. [ 8 ] [ 9 ] Space frame systems (such as Mero, Unistrut, Delta Structures, etc.) also tend to be modular in design. [ 10 ] Other materials used in construction which are interlocking and thus reusable/modular in nature include interlocking bricks. [ 11 ] [ 12 ] [ 13 ] | https://en.wikipedia.org/wiki/Modular_construction |
Modular design , or modularity in design, is a design principle that subdivides a system into smaller parts called modules (such as modular process skids), which can be independently created, modified, replaced, or exchanged with other modules or between different systems.
A modular design can be characterized by functional partitioning into discrete scalable and reusable modules, rigorous use of well-defined modular interfaces, and making use of industry standards for interfaces. In this context modularity is at the component level, and has a single dimension, component slottability. A modular system with this limited modularity is generally known as a platform system that uses modular components. Examples are car platforms or the USB port in computer engineering platforms.
In design theory this is distinct from a modular system, which has higher-dimensional modularity and degrees of freedom. A modular system design has no distinct lifetime and exhibits flexibility in at least three dimensions. In this respect modular systems are very rare in markets. Mero architectural systems are the closest example to a modular system in terms of hard products in markets. Weapons platforms, especially in aerospace, tend to be modular systems, wherein the airframe is designed to be upgraded multiple times during its lifetime, without the purchase of a completely new system. Modularity is best defined by the dimensions affected or the degrees of freedom in form, cost, or operation.
Modularity offers benefits such as reduced cost (customization can be limited to a portion of the system, rather than requiring an overhaul of the entire system), interoperability, shorter learning time, flexibility in design, non-generationally constrained augmentation or updating (adding a new solution by merely plugging in a new module), and exclusion. Modularity in platform systems offers benefits in returns to scale, reduced product development cost, reduced O&M costs, and faster time to market. Platform systems have enabled the wide use of system design in markets and the ability of product companies to separate the rate of the product cycle from R&D paths. The biggest drawback of modular systems is the designer or engineer: most designers are poorly trained in systems analysis and most engineers are poorly trained in design. The design complexity of a modular system is significantly higher than that of a platform system and requires experts in design and product strategy during the conception phase of system development. That phase must anticipate the directions and levels of flexibility necessary in the system to deliver the modular benefits. Modular systems could be viewed as a more complete or holistic design, whereas platform systems are more reductionist, limiting modularity to components. Complete or holistic modular design requires a much higher level of design skill and sophistication than the more common platform system.
Cars , computers , process systems , solar panels , wind turbines , elevators , furniture , looms , railroad signaling systems, telephone exchanges , pipe organs , synthesizers , electric power distribution systems and modular buildings are examples of platform systems using various levels of component modularity. For example, one cannot assemble a solar cube from extant solar components or easily replace the engine on a truck or rearrange a modular housing unit into a different configuration after a few years, as would be the case in a modular system. These key characteristics make modular furniture incredibly versatile and adaptable. [ 1 ] The only extant examples of modular systems in today's market are some software systems that have shifted away from versioning into a completely networked paradigm.
Modular design inherently combines the mass production advantages of standardization with those of customization . The degree of modularity, dimensionally, determines the degree of customization possible. For example, solar panel systems have 2-dimensional modularity which allows adjustment of an array in the x and y dimensions. Further dimensions of modularity would be introduced by making the panel itself and its auxiliary systems modular. Dimensions in modular systems are defined as the effected parameter such as shape or cost or lifecycle. Mero systems have 4-dimensional modularity, x, y, z, and structural load capacity. As can be seen in any modern convention space, the space frame's extra two dimensions of modularity allows far greater flexibility in form and function than solar's 2-d modularity. If modularity is properly defined and conceived in the design strategy, modular systems can create significant competitive advantage in markets. A true modular system does not need to rely on product cycles to adapt its functionality to the current market state. Properly designed modular systems also introduce the economic advantage of not carrying dead capacity, increasing the capacity utilization rate and its effect on cost and pricing flexibility.
Aspects of modular design can be seen in cars or other vehicles to the extent of there being certain parts to the car that can be added or removed without altering the rest of the car.
A simple example of modular design in cars is the fact that, while many cars come as a basic model, paying extra will allow for "snap in" upgrades such as a more powerful engine, vehicle audio , ventilated seats , or seasonal tires; these do not require any change to other units of the car such as the chassis , steering, electric motor or battery systems.
Modular design can be seen in certain buildings. Modular buildings (and also modular homes) generally consist of universal parts (or modules) that are manufactured in a factory and then shipped to a build site where they are assembled into a variety of arrangements. [ 2 ]
Modular buildings can be added to or reduced in size by adding or removing certain components. This can be done without altering larger portions of the building. Modular buildings can also undergo changes in functionality using the same process of adding or removing components.
For example, an office building can be built using modular parts such as walls, frames, doors, ceilings, and windows. The interior can then be partitioned (or divided) with more walls and furnished with desks, computers, and whatever else is needed for a functioning workspace. If the office needs to be expanded or redivided to accommodate employees, modular components such as wall panels can be added or relocated to make the necessary changes without altering the whole building. Later, this same office can be broken down and rearranged to form a retail space, conference hall or another type of building, using the same modular components that originally formed the office building. The new building can then be refurnished with whatever items are needed to carry out its desired functions.
Other types of modular buildings that are offered from a company like Allied Modular include a guardhouse , machine enclosure, press box , conference room , two-story building, clean room and many more applications. [ 3 ]
Many misconceptions are held regarding modular buildings. [ 4 ] In reality, modular construction is a viable method of construction for quick-turnaround projects and fast-growing companies. Industries that would benefit from this include healthcare, commercial, retail, military, and multi-family/student housing.
Modular design in computer hardware is the same as in other things (e.g. cars, refrigerators, and furniture). The idea is to build computers with easily replaceable parts that use standardized interfaces . This technique allows a user to upgrade certain aspects of the computer easily without having to buy another computer altogether.
A computer is one of the best examples of modular design. Typical computer modules include a computer chassis , power supply units , processors , mainboards , graphics cards , hard drives , and optical drives . All of these parts should be easily interchangeable as long as the user uses parts that support the same standard interface.
The idea of a modular smartphone was explored in Project Ara , which provided a platform for manufacturers to create modules for a smartphone which could then be customised by the end user. The Fairphone uses a similar principle, where the user can purchase individual parts to repair or upgrade the phone.
In 1963 Motorola introduced the first rectangular color picture tube, and in 1967 introduced the modular Quasar brand. In 1964 it opened its first research and development branch outside of the United States, in Israel under the management of Moses Basin. In 1974 Motorola sold its television business to the Japan-based Matsushita, the parent company of Panasonic .
Some firearms and weaponry use a modular design to make maintenance and operation easier and more familiar. For instance, German firearms manufacturer Heckler & Koch produces several weapons that, while being different types, are visually and, in many instances, internally similar. These are the G3 battle rifle , HK21 general-purpose machine gun , MP5 submachine gun , HK33 and G41 assault rifles , and PSG1 sniper rifle .
The concept of modular design has become popular for trade show exhibits and retail promotional displays . These kinds of promotional displays involve creative custom designs but need a temporary structure that can be reused. Thus many companies are adopting the modular approach to exhibit design, in which pre-engineered modular systems act as building blocks for a custom design. These can then be reconfigured into another layout and reused for a future show. This enables the user to reduce the cost of manufacturing and labor (for set-up and transport) and is a more sustainable way of creating experiential set-ups.
Product lifecycle management is a strategy for efficiently managing information about a product (and product families, platforms, modules, and parts) during its product lifecycle . [ 5 ] Researchers have described how integrating a digital twin —a digital representation of a physical product—with modular design can improve product lifecycle management. [ 6 ] [ 7 ]
Some authors observe that modular design has generated a constant increase of vehicle weight over time in the vehicle industry. Trancossi advanced the hypothesis that modular design can be coupled with optimization criteria derived from the constructal law . [ 8 ] In fact, the constructal law is modular by its nature and can be applied with interesting results to simple engineering systems. [ 9 ] It applies with a typical bottom-up optimization schema:
A better formulation was produced during the MAAT EU FP7 Project. [ 10 ] A new design method that couples the above bottom-up optimization with a preliminary system-level top-down design has been formulated. [ 11 ] The two-step design process is motivated by the observation that constructal and modular design do not by themselves refer to any objective to be reached in the design process. A theoretical formulation has been provided in a recent paper, [ 8 ] and applied with success to the design of a small aircraft, [ 12 ] the conceptual design of innovative commuter aircraft, [ 13 ] [ 14 ] the design of a new entropic wall, [ 15 ] and an innovative off-road vehicle designed for energy efficiency . [ 16 ] | https://en.wikipedia.org/wiki/Modular_design |
In mathematics , a modular equation is an algebraic equation satisfied by moduli , [ 1 ] in the sense of moduli problems . That is, given a number of functions on a moduli space , a modular equation is an equation holding between them, or in other words an identity for moduli.
The most frequent use of the term modular equation is in relation to the moduli problem for elliptic curves . In that case the moduli space itself is of dimension one. That implies that any two rational functions F and G , in the function field of the modular curve, will satisfy a modular equation P ( F , G ) = 0 with P a non-zero polynomial of two variables over the complex numbers . For suitable non-degenerate choice of F and G , the equation P ( X , Y ) = 0 will actually define the modular curve.
This can be qualified by saying that P , in the worst case, will be of high degree and the plane curve it defines will have singular points ; and the coefficients of P may be very large numbers. Further, the 'cusps' of the moduli problem, which are the points of the modular curve not corresponding to honest elliptic curves but degenerate cases, may be difficult to read off from knowledge of P .
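To make the size of these coefficients concrete: for F = j(τ) and G = j(2τ) (a standard example from the literature, not discussed explicitly in the text above), the modular equation is the classical level-2 modular polynomial

$\Phi_2(X,Y) = X^3 + Y^3 - X^2Y^2 + 1488\,XY(X+Y) - 162000\,(X^2+Y^2) + 40773375\,XY + 8748000000\,(X+Y) - 157464000000000,$

whose coefficients are already very large even at this small level.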
In that sense a modular equation becomes the equation of a modular curve . Such equations first arose in the theory of multiplication of elliptic functions (geometrically, the n 2 -fold covering map from a 2- torus to itself given by the mapping x → n · x on the underlying group) expressed in terms of complex analysis .
| https://en.wikipedia.org/wiki/Modular_equation |
Modular Function Deployment (MFD) is a method for creating modular product architectures, based on research performed at KTH Royal Institute of Technology in the 1990s. [ 1 ] As a result of said research, the company Modular Management was registered in 1996, offering consultancy services centered on the MFD method. [ 2 ]
With a modular product architecture, companies can offer a wide range of products and services without increasing complexity, since modules and module variants, like blocks of LEGO , can be configured in many different ways. The MFD method ensures that each module has functional, strategic and customer-centric value and can be combined with other modules through standardized interfaces. [ 3 ] [ 4 ] A modular product architecture can enable mass customization , where customers configure and order personalized—rather than ready-made—products and services. [ 5 ]
MFD consists of five steps and is often illustrated as a circle to emphasize that it is an iterative process. [ 3 ] | https://en.wikipedia.org/wiki/Modular_function_deployment |
In theoretical physics , modular invariance is invariance under a group of large diffeomorphisms of the torus , such as SL(2,Z) . The name comes from the classical name modular group for this group, as in modular form theory.
In string theory , modular invariance is an additional requirement for one-loop diagrams . This helps in getting rid of some global anomalies such as the gravitational anomalies .
Equivalently, in two-dimensional conformal field theory the torus partition function must be invariant under the modular group SL(2,Z) .
| https://en.wikipedia.org/wiki/Modular_invariance |
A modular process skid is a process system contained within a frame that allows the process system to be easily transported ( skid mount ). Individual skids can contain complete process systems and multiple process skids can be combined to create larger process systems or entire portable plants. They are sometimes called “a system in a box.” An example of a multi-skid process system might include a raw materials skid, a utilities skid and a processing unit which work in tandem.
Process skids are considered an alternative to traditional stick-built construction where process system parts are shipped individually and installed incrementally at the manufacturing site. [ 1 ] They provide the advantage of parallel construction, where process systems are built off-site in a fabrication facility while civil site upgrades are completed at the plant site simultaneously. [ 2 ] Skids are not always appropriate. If individual process parts are large and cannot reasonably be contained within the frame of a modular process skid, traditional construction methods are preferred.
Process skids are designed to contain a complete process system, a complete unit of operations or to organize a manufacturing process into logical units. [ 3 ] All skids have the following characteristics in common:
Modular process skids typically contain the following equipment: | https://en.wikipedia.org/wiki/Modular_process_skid |
A modular vehicle is one in which substantial components of the vehicle are interchangeable. This modularity is intended to make repairs and maintenance easier or to allow the vehicle to be reconfigured to suit different functions.
Another application of modular vehicle design is to enable the exchange of batteries in an electric vehicle.
In a modular electric vehicle, the power system, wheels, and suspension can be contained in a single module or chassis. When the batteries need recharging, the vehicle's body is lifted off and placed onto a fresh power module. By using this Modular Vehicle system, the vehicle's batteries do not have to be removed or reinstalled, and their connections remain intact.
The world's first road-licensed quick-change modular electric vehicle, based on a patent awarded to Dr Gordon E Dower in 2000, [ 1 ] was shown at the World Electric Vehicle Association 2003 Electric Vehicle Symposium EVS-20 in Long Beach, California, USA. [ citation needed ]
Dower described the vehicle's two parts as its motorized deck, shortened to Modek , and its "containing module" or Ridon . When attached to each other, the vehicle thus formed was dubbed the Ridek . Mechanical connections between the modules for braking and steering automatically engage when the body is lowered onto the chassis. [ citation needed ]
In 2004, General Motors attempted to patent a modular vehicle [ 2 ] called Autonomy , [ citation needed ] but the attempt was unsuccessful because Dower's patent already existed. [ 3 ]
A team at GM did, however, continue to work on Autonomy, which was intended to be powered by a hydrogen fuel cell. They unveiled a non-drivable version of their modular vehicle in January 2002 at the Detroit Auto Show. [ 4 ] GM unveiled a drivable prototype, called Hy-wire at the Paris Auto Show in September 2002. [ 4 ] The name referred to the Hydrogen fuel and the " Drive by wire " system that electronically connected the vehicle modules for steering, braking, and controlling the four wheel motors. Hy-wire did not go into production. [ citation needed ]
In the 2010s, a number of modular platforms were developed by car manufacturers. Geely Auto developed the Compact Modular Architecture platform (2017), the B-segment Modular Architecture platform (2018), and the Sustainable Experience Architecture platform (2021); PSA Group and Dongfeng developed the Common Modular Platform (2018).
Modular vehicles make it possible to use different types of bodies, e.g., sedan, sports car, or pickup truck, on one standardized chassis. [ 5 ]
Also, the modular chassis, with its batteries and motor, is relatively easy to work on since there is no vehicle body to impede access. | https://en.wikipedia.org/wiki/Modular_vehicle |
Modularity is the degree to which a system 's components may be separated and recombined, often with the benefit of flexibility and variety in use. [ 1 ] The concept of modularity is used primarily to reduce complexity by breaking a system into parts with varying degrees of interdependence and independence, and to "hide the complexity of each part behind an abstraction and interface". [ 2 ] However, the concept of modularity can be extended to multiple disciplines, each with their own nuances. Despite these nuances, consistent themes concerning modular systems can be identified. [ 3 ]
Composability is one of the tenets of functional programming . This makes functional programs modular. [ 4 ]
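A minimal sketch in Python (an illustration, not drawn from the cited source) shows how composability yields modularity: small, independent functions can be recombined without either part knowing about the other.

```python
from functools import reduce

def compose(*funcs):
    """Compose functions right-to-left: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), funcs)

# Each function is a self-contained module...
def double(x):
    return 2 * x

def increment(x):
    return x + 1

# ...and composition recombines them without modifying either one.
double_then_increment = compose(increment, double)
print(double_then_increment(10))  # 21
```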
The meaning of the word "modularity" can vary somewhat based on context. The following are contextual examples of modularity across several fields of science, technology, industry, and culture:
The term modularity is widely used in studies of technological and organizational systems. Product systems are deemed "modular", for example, when they can be decomposed into a number of components that may be mixed and matched in a variety of configurations. [ 7 ] [ 8 ] The components are able to connect, interact, or exchange resources (such as energy or data) in some way, by adhering to a standardized interface. Unlike a tightly integrated product whereby each component is designed to work specifically (and often exclusively) with other particular components in a tightly coupled system, modular products are systems of components that are " loosely coupled ." [ 9 ]
In The Language of New Media , Lev Manovich proposes five "principles of new media"—to be understood "not as absolute laws but rather as general tendencies of a culture undergoing computerization." [ 10 ] The five principles are numerical representation, modularity, automation, variability, and transcoding. Modularity within new media represents new media as being composed of several separate self-sufficient modules that can act independently or together in synchronisation to complete the new media object. In Photoshop , modularity is most evident in layers; a single image can be composed of many layers, each of which can be treated as an entirely independent and separate entity. Websites can be defined as being modular, their structure is formed in a format that allows their contents to be changed, removed or edited whilst still retaining the structure of the website. This is because the website's content operates separately to the website and does not define the structure of the site. The entire Web , Manovich notes, has a modular structure, composed of independent sites and pages, and each webpage itself is composed of elements and code that can be independently modified. [ 11 ]
Organizational systems are said to become increasingly modular when they begin to substitute loosely coupled forms for tightly integrated, hierarchical structures. [ 12 ] For instance, when the firm utilizes contract manufacturing rather than in-house manufacturing, it is using an organizational component that is more independent than building such capabilities in-house: the firm can switch between contract manufacturers that perform different functions, and the contract manufacturer can similarly work for different firms. [ 12 ] As firms in a given industry begin to substitute loose coupling with organizational components that lie outside of firm boundaries for activities that were once conducted in-house, the entire production system (which may encompass many firms) becomes increasingly modular. The firms themselves become more specialized components. Using loosely coupled structures enables firms to achieve greater flexibility in both scope and scale. [ 12 ] This is in line with modularity in the processes of production, which relates to the way that technological artifacts are produced. This consists of the artifact's entire value chain, from the designing of the artifact to the manufacturing and distribution stages. In production, modularity is often due to increased design modularity. [ 13 ] The firm can switch easily between different providers of these activities (e.g., between different contract manufacturers or alliance partners) compared to building the capabilities for all activities in house, thus responding to different market needs more quickly. However, these flexibility gains come with a price. Therefore, the organization must assess the flexibility gains achievable, and any accompanying loss of performance, with each of these forms.
Modularization within firms leads to the disaggregation of the traditional form of hierarchical governance. [ 14 ] [ 15 ] [ 16 ] The firm is decomposed into relatively small autonomous organizational units (modules) to reduce complexity. Modularization leads to a structure in which the modules integrate strongly interdependent tasks, while the interdependencies between the modules are weak. In this connection, the dissemination of modular organizational forms has been facilitated by the widespread efforts of most large firms to re-engineer, refocus and restructure. These efforts usually involve a strong process orientation: the complete service-provision process of the business is split into partial processes which can then be handled autonomously by cross-functional teams within organizational units (modules). The coordination of the modules is often carried out using internal market mechanisms, in particular the implementation of profit centers . Overall, modularization enables more flexible and quicker reaction to changing general or market conditions. Building on the above principles, many alternative forms of modularization of organizations (for-profit or non-profit) are possible. [ 13 ] [ 17 ] However, modularization is not an independent and self-contained organizational concept, but rather consists of several basic ideas which are integral parts of other organizational concepts. These central ideas can be found in every firm. Accordingly, it is not sensible to characterize a firm as "modular" or "not modular", because firms are always modular to some degree.
Input systems, or "domain specific computational mechanisms" (such as the ability to perceive spoken language) are termed vertical faculties, and according to Jerry Fodor they are modular in that they possess a number of characteristics Fodor argues constitute modularity. Fodor's list of features characterizing modules includes the following:
Fodor does not argue that this is a formal definition or an all-inclusive list of features necessary for modularity. He argues only that cognitive systems characterized by some of the features above are likely to be characterized by them all, and that such systems can be considered modular. He also notes that the characteristics are not an all-or-nothing proposition; rather, each of the characteristics may be manifest to some degree, and modularity itself is also not a dichotomous construct: something may be more or less modular. "One would thus expect—what anyhow seems to be desirable—that the notion of modularity ought to admit of degrees" (Fodor, 1996 [1983]:37).
Notably, Fodor's "not assembled" feature contrasts sharply with the use of modularity in other fields in which modular systems are seen to be hierarchically nested (that is, modules are themselves composed of modules, which in turn are composed of modules, etc.) However, Max Coltheart notes that Fodor's commitment to the non-assembled feature appears weak, [ 18 ] and other scholars (e.g., Block [ 19 ] ) have proposed that Fodor's modules could be decomposed into finer modules. For instance, while Fodor distinguishes between separate modules for spoken and written language, Block might further decompose the spoken language module into modules for phonetic analysis and lexical forms: [ 18 ] "Decomposition stops when all the components are primitive processors—because the operation of a primitive processor cannot be further decomposed into suboperations" [ 19 ]
Though Fodor's work on modularity is one of the most extensive, there is other work in psychology on modularity worth noting for its symmetry with modularity in other disciplines. For instance, while Fodor focused on cognitive input systems as modules, Coltheart proposes that there may be many different kinds of cognitive modules, and distinguishes between, for example, knowledge modules and processing modules. The former is a body of knowledge that is independent of other bodies of knowledge, while the latter is a mental information-processing system independent from other such systems.
However, the data neuroscientists have accumulated have not pointed to an organizational system as neat and precise as the modularity theory originally proposed by Jerry Fodor. The picture has been shown to be much messier and to differ from person to person, even though general patterns exist; through a mixture of neuroimaging and lesion studies, it has been shown that certain regions perform certain functions while other regions do not perform those functions. [ 20 ]
As in some of the other disciplines, the term modularity may be used in multiple ways in biology. For example, it may refer to organisms that have an indeterminate structure wherein modules of various complexity (e.g., leaves, twigs) may be assembled without strict limits on their number or placement. Many plants and sessile (immobile) invertebrates of the benthic zones demonstrate this type of modularity (by contrast, many other organisms have a determinate structure that is predefined in embryogenesis ). [ 21 ] The term has also been used in a broader sense in biology to refer to the reuse of homologous structures across individuals and species. Even within this latter category, there may be differences in how a module is perceived. For instance, evolutionary biologists may focus on the module as a morphological component (subunit) of a whole organism, while developmental biologists may use the term module to refer to some combination of lower-level components (e.g., genes ) that are able to act in a unified way to perform a function. [ 22 ] In the former, the module is perceived as a basic component, while in the latter the emphasis is on the module as a collective.
Biology scholars have provided a list of features that should characterize a module (much as Fodor did in The Modularity of Mind [ 23 ] ). For instance, Rudy Raff [ 24 ] provides the following list of characteristics that developmental modules should possess:
To Raff's mind, developmental modules are "dynamic entities representing localized processes (as in morphogenetic fields) rather than simply incipient structures ... (... such as organ rudiments)". [ 24 ] : 326 Bolker, however, attempts to construct a definitional list of characteristics that is more abstract, and thus more suited to multiple levels of study in biology. She argues that:
Another stream of research on modularity in biology that should be of particular interest to scholars in other disciplines is that of Günter Wagner and Lee Altenberg . Altenberg's work, [ 27 ] Wagner's work, [ 28 ] and their joint writing [ 29 ] explores how natural selection may have resulted in modular organisms, and the roles modularity plays in evolution. Altenberg's and Wagner's work suggests that modularity is both the result of evolution, and facilitates evolution—an idea that shares a marked resemblance to work on modularity in technological and organizational domains.
The use of modules in the fine arts has a long pedigree among diverse cultures. In the classical architecture of Greco-Roman antiquity, the module was utilized as a standardized unit of measurement for proportioning the elements of a building. Typically the module was established as one-half the diameter of the lower shaft of a classical column; all the other components in the syntax of the classical system were expressed as a fraction or multiple of that module. In traditional Japanese construction, room sizes were often determined by combinations of standard rice mats called tatami ; the standard dimension of a mat was around 3 feet by 6 feet, which approximate the overall proportions of a reclining human figure. The module thus becomes not only a proportional device for use with three-dimensional vertical elements but a two-dimensional planning tool as well.
Modularity as a means of measurement is intrinsic to certain types of building; for example, brick construction is by its nature modular insofar as the fixed dimensions of a brick necessarily yield dimensions that are multiples of the original unit. Attaching bricks to one another to form walls and surfaces also reflects a second definition of modularity: namely, the use of standardized units that physically connect to each other to form larger compositions.
With the advent of modernism and advanced construction techniques in the 20th century this latter definition transforms modularity from a compositional attribute to a thematic concern in its own right. A school of modular constructivism develops in the 1950s among a circle of sculptors who create sculpture and architectural features out of repetitive units cast in concrete. A decade later modularity becomes an autonomous artistic concern of its own, as several important Minimalist artists adopt it as their central theme. Modular building as both an industrial production model and an object of advanced architectural investigation develops from this same period.
Modularity has found renewed interest among proponents of ModulArt , a form of modular art in which the constituent parts can be physically reconfigured, removed and/or added to. After a few isolated experiments in ModulArt starting in the 1950s, [ 30 ] several artists since the 1990s have explored this flexible, customizable and co-creative form of art. [ 31 ]
Modularity in fashion is the ability to customise garments through adding and removing elements or altering the silhouette, usually via zips, hook and eye closures or other fastenings. Throughout history it has been used to tailor garments, existing even in the 17th century . In recent years, an increasing number of fashion designers – especially those focused on slow or sustainable fashion – are experimenting with this concept. Within the realm of Haute Couture , Yohji Yamamoto and Hussein Chalayan are notable examples, the latter especially for his use of technology to create modular garments.
Studies carried out in Finland and the US show favourable attitudes of consumers towards modular fashion; [ 32 ] despite this, the concept has not yet made it into mainstream fashion. The current emphasis within modular fashion is on the co-designing and customisation factors for consumers, with the goal of keeping up with swift changes in customers' needs and wants, while also tackling sustainability by increasing the life-cycle of garments. [ 33 ]
Modularity is a concept that has been used thoroughly in architecture and industry. In interior design, modularity is used to achieve customizable products that are economically viable. Examples include some of the customizable creations of IKEA and, mostly, high-end high-cost concepts. Modularity in interior design, or "modularity in use", [ 13 ] refers to the opportunities for combining and reconfiguring modules in order to create an artefact that suits the specific needs of the user and simultaneously grows with them. The evolution of 3D printing technology has made customizable furniture feasible: objects can be prototyped, changed depending on the space, and customized to the user's needs, and designers can prototype and showcase their modules over the internet using 3D printing technology. Sofas are a common piece with modular utilities, ranging from ottoman to bed, as well as swappable fabrics and textiles. [ 34 ] This originated in the 1940s after being invented by Harvey Probber , was refined in the 1970s, and reached mass-scale consumption in the 2010s and 2020s. [ 35 ]
In John Blair's Modular America , [ 36 ] he argues that as Americans began to replace social structures inherited from Europe (predominantly England and France), they evolved a uniquely American tendency towards modularity in fields as diverse as education, music, and architecture.
Blair observes that when the word module first emerged in the sixteenth and seventeenth centuries, it meant something very close to model . It implied a small-scale representation or example. By the eighteenth and nineteenth centuries, the word had come to imply a standard measure of fixed ratios and proportions. For example, in architecture, the proportions of a column could be stated in modules (i.e., "a height of fourteen modules equaled seven times the diameter measured at the base" [ 36 ] : 2 ) and thus multiplied to any size while still retaining the desired proportions.
However, in America, the meaning and usage of the word shifted considerably: "Starting with architectural terminology in the 1930s, the new emphasis was on any entity or system designed in terms of modules as subcomponents. As applications broadened after World War II to furniture, hi-fi equipment, computer programs and beyond, modular construction came to refer to any whole made up of self-contained units designed to be equivalent parts of a system, hence, we might say, "systemically equivalent." Modular parts are implicitly interchangeable and/or recombinable in one or another of several senses". [ 36 ] : 3
Blair defines a modular system as "one that gives more importance to parts than to wholes. Parts are conceived as equivalent and hence, in one or more senses, interchangeable and/or cumulative and/or recombinable" (p. 125). Blair describes the emergence of modular structures in education (the college curriculum), industry (modular product assembly), architecture (skyscrapers), music (blues and jazz), and more. In his concluding chapter, Blair does not commit to a firm view of what causes Americans to pursue more modular structures in the diverse domains in which it has appeared; but he does suggest that it may in some way be related to the American ideology of liberal individualism and a preference for anti-hierarchical organization.
Comparing the use of modularity across disciplines reveals several themes:
One theme that shows up in the psychology and biology literatures is innate specification. Innately specified (as used here) implies that the purpose or structure of the module is predetermined by some biological mandate.
Domain specificity , that modules respond only to inputs of a specific class (or perform functions only of a specific class) is a theme that clearly spans psychology and biology, and it can be argued that it also spans technological and organizational systems. Domain specificity would be seen in the latter disciplines as specialization of function.
Hierarchically nested is a theme that recurs in most disciplines. Though originally disavowed by Jerry Fodor , other psychologists have embraced it, and it is readily apparent in the use of modularity in biology (e.g., each module of an organism can be decomposed into finer modules), social processes and artifacts (e.g., we can think of a skyscraper in terms of blocks of floors, a single floor, elements of a floor, etc.), mathematics (e.g., the modulus 6 may be further divided into the moduli 1, 2 and 3), and technological and organizational systems (e.g., an organization may be composed of divisions, which are composed of teams, which are composed of individuals). [ 37 ]
Greater internal than external integration is a theme that showed up in every discipline but mathematics. Often referred to as autonomy, this theme acknowledged that there may be interaction or integration between modules, but the greater interaction and integration occurs within the module. This theme is very closely related to information encapsulation , which shows up explicitly in both the psychology and technology research.
Near decomposability (as termed by Simon, 1962) shows up in all of the disciplines, but is manifest in a matter of degrees. For instance, in psychology and biology it may refer merely to the ability to delineate one module from another (recognizing the boundaries of the module). In several of the social artifacts, mathematics, and technological or organizational systems, however, it refers to the ability to actually separate components from one another. In several of the disciplines this decomposability also enables the complexity of a system (or process) to be reduced. This is aptly captured in a quote from David Marr [ 38 ] about psychological processes where he notes that, "any large computation should be split up into a collection of small, nearly independent, specialized subprocesses." Reducing complexity is also the express purpose of casting out nines in mathematics.
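As a brief worked illustration of how casting out nines reduces complexity: to check 345 × 67 = 23,115, reduce each number to its digit sum modulo 9 (345 → 3, 67 → 4); then 3 × 4 = 12 → 3, which matches 23,115 → 2+3+1+1+5 = 12 → 3, so the check passes using only single-digit arithmetic.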
Substitutability and recombinability are closely related constructs. The former refers to the ability to substitute one component for another as in John Blair's "systemic equivalence" while the latter may refer both to the indeterminate form of the system and the indeterminate use of the component. In US college curricula, for example, each course is designed with a credit system that ensures a uniform number of contact hours, and approximately uniform educational content, yielding substitutability. By virtue of their substitutability, each student may create their own curricula (recombinability of the curriculum as a system) and each course may be said to be recombinable with a variety of students' curricula (recombinability of the component within multiple systems). Both substitutability and recombinability are immediately recognizable in Blair's social processes and artifacts, and are also well captured in Garud and Kumaraswamy's [ 39 ] discussion of economies of substitution in technological systems. [ 40 ]
Blair's systemic equivalence also demonstrates the relationship between substitutability and the module as a homologue . Blair's systemic equivalence refers to the ability for multiple modules to perform approximately the same function within a system, while in biology a module as a homologue refers to different modules sharing approximately the same form or function in different organisms. The extreme of the module as homologue is found in mathematics, where (in the simplest case) the modules refer to the reuse of a particular number and thus each module is exactly alike. [ 40 ]
In all but mathematics, there has been an emphasis that modules may be different in kind. In Fodor's discussion of modular cognitive system, each module performs a unique task. In biology, even modules that are considered homologous may be somewhat different in form and function (e.g., a whale's fin versus a human's hand). In Blair's book, he points out that while jazz music may be composed of structural units that conform to the same underlying rules, those components vary significantly. Similarly in studies of technology and organization, modular systems may be composed of modules that are very similar (as in shelving units that may be piled one atop the other) or very different (as in a stereo system where each component performs unique functions) or any combination in between. [ 40 ]
| https://en.wikipedia.org/wiki/Modularity |
Modularity refers to the ability of a system to organize discrete, individual units that can, overall, increase the efficiency of network activity and, in a biological sense, facilitate selective forces upon the network. Modularity is observed in all model systems, and can be studied at nearly every scale of biological organization, from molecular interactions all the way up to the whole organism .
The exact evolutionary origins of biological modularity have been debated since the 1990s. In the mid-1990s, Günter Wagner [ 1 ] argued that modularity could have arisen and been maintained through the interaction of four evolutionary modes of action:
[1] Selection for the rate of adaptation : If different complexes evolve at different rates, then those evolving more quickly reach fixation in a population faster than other complexes. Thus, common evolutionary rates could be forcing the genes for certain proteins to evolve together while preventing other genes from being co-opted unless there is a shift in evolutionary rate.
[2] Constructional selection: When a gene exists in many duplicated copies, it may be maintained because of the many connections it has (also termed pleiotropy ). There is evidence that this is so following whole genome duplication, or duplication at a single locus. However, the direct relationship that duplication processes have with modularity has yet to be directly examined.
[3] Stabilizing selection : While seeming antithetical to forming novel modules, Wagner maintains that it is important to consider the effects of stabilizing selection as it may be "an important counter force against the evolution of modularity". Stabilizing selection, if ubiquitously spread across the network, could then be a "wall" that makes the formation of novel interactions more difficult and maintains previously established interactions. Against such strong positive selection, other evolutionary forces acting on the network must exist, with gaps of relaxed selection, to allow focused reorganization to occur.
[4] Compounded effect of stabilizing and directional selection : This is the explanation seemingly favored by Wagner and his contemporaries as it provides a model through which modularity is constricted, but still able to unidirectionally explore different evolutionary outcomes. The semi-antagonistic relationship is best illustrated using the corridor model, whereby stabilizing selection forms barriers in phenotype space that only allow the system to move towards the optimum along a single path. This allows directional selection to act and inch the system closer to optimum through this evolutionary corridor.
For over a decade, researchers examined the dynamics of selection on network modularity. However, in 2013 Clune and colleagues [ 2 ] challenged the sole focus on selective forces, and instead provided evidence that there are inherent "connectivity costs" that limit the number of connections between nodes to maximize efficiency of transmission. This hypothesis originated from neurological studies that found that there is an inverse relationship between the number of neural connections and the overall efficiency (more connections seemed to limit the overall performance speed/precision of the network). This connectivity cost had yet to be applied to evolutionary analyses. Clune et al. created a series of models that compared the efficiency of various evolved network topologies in an environment where performance, their only metric for selection, was taken into account, and another treatment where performance as well as the connectivity cost were factored together. The results show not only that modularity formed ubiquitously in the models that factored in connection cost, but that these models also outperformed the performance-only based counterparts in every task. This suggests a potential model for module evolution whereby modules form from a system’s tendency to resist maximizing connections to create more efficient and compartmentalized network topologies. | https://en.wikipedia.org/wiki/Modularity_(biology) |
The modulated complex lapped transform (MCLT) is a lapped transform , similar to the modified discrete cosine transform , that explicitly represents the phase (complex values) of the signal.
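A direct (non-fast) sketch of the transform in numpy, following Malvar's commonly cited definition (sign and normalization conventions vary between references, so treat this as an assumption-laden illustration rather than a reference implementation):

```python
import numpy as np

def mclt(x, M):
    """Direct MCLT of one length-2M block: the real part uses an
    MDCT-like cosine basis and the imaginary part an MDST-like sine
    basis, sharing a single sine window."""
    n = np.arange(2 * M)
    k = np.arange(M)[:, None]
    w = np.sin((n + 0.5) * np.pi / (2 * M))        # sine window
    phase = (n + (M + 1) / 2) * (k + 0.5) * np.pi / M
    pc = np.sqrt(2 / M) * w * np.cos(phase)        # cosine (MDCT) basis
    ps = np.sqrt(2 / M) * w * np.sin(phase)        # sine (MDST) basis
    return pc @ x - 1j * (ps @ x)                  # M complex coefficients

M = 8
block = np.random.randn(2 * M)                     # one 2M-sample block
print(mclt(block, M).shape)                        # (8,)
```

Keeping both the cosine and sine projections is what lets the MCLT represent phase explicitly, unlike the real-valued MDCT alone.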
| https://en.wikipedia.org/wiki/Modulated_complex_lapped_transform |
The modulation error ratio or MER is a measure used to quantify the performance of a digital radio (or digital TV) transmitter or receiver in a communications system using digital modulation (such as QAM ). A signal sent by an ideal transmitter or received by a receiver would have all constellation points precisely at the ideal locations, however various imperfections in the implementation (such as noise , low image rejection ratio , phase noise , carrier suppression , distortion , etc.) or signal path cause the actual constellation points to deviate from the ideal locations.
Transmitter MER can be measured by specialized equipment, which demodulates the received signal in a similar way to how a real radio demodulator does it. The demodulated and detected signal can be used as a reasonably reliable estimate for the ideal transmitted signal in MER calculation.
An error vector is a vector in the I-Q plane between the ideal constellation point and the point received by the receiver. The Euclidean distance between the two points is its magnitude.
The modulation error ratio is equal to the ratio of the root mean square (RMS) power (in Watts) of the reference vector to the power (in Watts) of the error. It is defined in dB as: MER(dB) = 10 log₁₀( P_signal / P_error ),
where P_error is the RMS power of the error vector, and P_signal is the RMS power of the ideal transmitted signal.
MER is defined as a percentage in a compatible (but reciprocal) way: MER(%) = √( P_error / P_signal ) × 100%,
with the same definitions.
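To make the definitions concrete, here is a minimal Python sketch (using NumPy; the function and variable names are illustrative, and it assumes the received symbols have already been demodulated and paired with their ideal constellation points):

```python
import numpy as np

def mer_db(ideal: np.ndarray, received: np.ndarray) -> float:
    """MER in dB from complex I+jQ symbols; `ideal` holds the reference
    constellation points, `received` the measured ones."""
    p_signal = np.mean(np.abs(ideal) ** 2)            # average power of reference
    p_error = np.mean(np.abs(received - ideal) ** 2)  # average power of error vectors
    return 10 * np.log10(p_signal / p_error)

# Example: QPSK symbols with a little additive noise
rng = np.random.default_rng(0)
ideal = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=1000)
received = ideal + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(f"MER = {mer_db(ideal, received):.1f} dB")
```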
MER is closely related to error vector magnitude (EVM), but MER is calculated from the average power of the signal. MER is also closely related to signal-to-noise ratio . MER includes all imperfections including deterministic amplitude imbalance , quadrature error and distortion , while noise is random by nature. | https://en.wikipedia.org/wiki/Modulation_error_ratio |
A modulation transformer is an audio-frequency transformer that forms a major part of most AM transmitters. The primary winding of a modulation transformer is fed by an audio amplifier that has about 1/2 of the rated input power of the transmitter's final amplifier stage. The secondary winding is in series with the power supply of that final radio-frequency amplifier stage, thereby allowing the audio signal to lower and raise the instantaneous DC supply voltage of the power amplifier (PA) tube or transistor. Considering that the PA device is operated as a class-C amplifier, i.e. as a switch, the modulation transformer is responsible for the amplitude modulation (AM) of the transmitter. Only one system of high-power modulation, loop modulation, can be used without a modulation transformer.
| https://en.wikipedia.org/wiki/Modulation_transformer
In algebra , a module homomorphism is a function between modules that preserves the module structures. Explicitly, if M and N are left modules over a ring R , then a function f : M → N {\displaystyle f:M\to N} is called an R - module homomorphism or an R - linear map if for any x , y in M and r in R , f ( x + y ) = f ( x ) + f ( y ) and f ( r x ) = r f ( x ).
In other words, f is a group homomorphism (for the underlying additive groups) that commutes with scalar multiplication . If M , N are right R -modules, then the second condition is replaced with f ( x r ) = f ( x ) r .
The preimage of the zero element under f is called the kernel of f . The set of all module homomorphisms from M to N is denoted by Hom R ( M , N ) {\displaystyle \operatorname {Hom} _{R}(M,N)} . It is an abelian group (under pointwise addition) but is not necessarily a module unless R is commutative .
The composition of module homomorphisms is again a module homomorphism, and the identity map on a module is a module homomorphism. Thus, all the (say left) modules together with all the module homomorphisms between them form the category of modules .
A module homomorphism is called a module isomorphism if it admits an inverse homomorphism; in particular, it is a bijection . Conversely, one can show a bijective module homomorphism is an isomorphism; i.e., the inverse is a module homomorphism. In particular, a module homomorphism is an isomorphism if and only if it is an isomorphism between the underlying abelian groups.
The isomorphism theorems hold for module homomorphisms.
A module homomorphism from a module M to itself is called an endomorphism and an isomorphism from M to itself an automorphism . One writes End R ( M ) = Hom R ( M , M ) {\displaystyle \operatorname {End} _{R}(M)=\operatorname {Hom} _{R}(M,M)} for the set of all endomorphisms of a module M . It is not only an abelian group but is also a ring with multiplication given by function composition, called the endomorphism ring of M . The group of units of this ring is the automorphism group of M .
Schur's lemma says that a homomorphism between simple modules (modules with no non-trivial submodules ) must be either zero or an isomorphism. In particular, the endomorphism ring of a simple module is a division ring .
In the language of category theory , an injective homomorphism is also called a monomorphism and a surjective homomorphism an epimorphism .
In short, Hom inherits a ring action that was not used up to form Hom. More precisely, let M , N be left R -modules. Suppose M has a right action of a ring S that commutes with the R -action; i.e., M is an ( R , S )-module. Then
Hom_R( M , N ) has the structure of a left S -module defined by: for s in S and x in M , ( s ⋅ f )( x ) = f ( x s ).
It is well-defined (i.e., s ⋅ f {\displaystyle s\cdot f} is R -linear) since ( s ⋅ f )( r x ) = f ( r x s ) = r f ( x s ) = r ( s ⋅ f )( x ), and s ⋅ f {\displaystyle s\cdot f} is a ring action since (( s t ) ⋅ f )( x ) = f ( x s t ) = ( t ⋅ f )( x s ) = ( s ⋅ ( t ⋅ f ))( x ).
Note: the above verification would "fail" if one used the left R -action in place of the right S -action. In this sense, Hom is often said to "use up" the R -action.
Similarly, if M is a left R -module and N is an ( R , S )-module, then Hom R ( M , N ) {\displaystyle \operatorname {Hom} _{R}(M,N)} is a right S -module by ( f ⋅ s ) ( x ) = f ( x ) s {\displaystyle (f\cdot s)(x)=f(x)s} .
The relationship between matrices and linear transformations in linear algebra generalizes in a natural way to module homomorphisms between free modules. Precisely, given a right R -module U , there is the canonical isomorphism of the abelian groups Hom_R( U^⊕n , U^⊕m ) ≃ M_{m,n}( End_R( U ) ),
obtained by viewing U ⊕ n {\displaystyle U^{\oplus n}} consisting of column vectors and then writing f as an m × n matrix. In particular, viewing R as a right R -module and using End R ( R ) ≃ R {\displaystyle \operatorname {End} _{R}(R)\simeq R} , one has End_R( R^⊕n ) ≃ M_n( R ), the ring of n × n matrices over R ,
which turns out to be a ring isomorphism (as a composition corresponds to a matrix multiplication ).
Note the above isomorphism is canonical; no choice is involved. On the other hand, if one is given a module homomorphism between finite-rank free modules , then a choice of an ordered basis corresponds to a choice of an isomorphism F ≃ R n {\displaystyle F\simeq R^{n}} . The above procedure then gives the matrix representation with respect to such choices of the bases. For more general modules, matrix representations may either lack uniqueness or not exist.
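As a concrete illustration (a sketch for the special case R = Z, using NumPy), a homomorphism between finite-rank free modules is just an integer matrix acting on column vectors, and composition corresponds to matrix multiplication:

```python
import numpy as np

f = np.array([[1, 2],
              [0, 3],
              [4, 0]])        # a homomorphism Z^2 -> Z^3
g = np.array([[2, 0, 1]])     # a homomorphism Z^3 -> Z^1

x = np.array([[5], [-7]])     # an element of Z^2 as a column vector

# The composition g∘f acts as the matrix product g @ f
assert np.array_equal(g @ (f @ x), (g @ f) @ x)
```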
In practice, one often defines a module homomorphism by specifying its values on a generating set . More precisely, let M and N be left R -modules. Suppose a subset S generates M ; i.e., there is a surjection F → M {\displaystyle F\to M} with a free module F with a basis indexed by S and kernel K (i.e., one has a free presentation ). Then to give a module homomorphism M → N {\displaystyle M\to N} is to give a module homomorphism F → N {\displaystyle F\to N} that kills K (i.e., maps K to zero).
If f : M → N {\displaystyle f:M\to N} and g : M ′ → N ′ {\displaystyle g:M'\to N'} are module homomorphisms, then their direct sum is f ⊕ g : M ⊕ M′ → N ⊕ N′, ( x , x′ ) ↦ ( f ( x ), g ( x′ )),
and their tensor product is f ⊗ g : M ⊗ M′ → N ⊗ N′, x ⊗ x′ ↦ f ( x ) ⊗ g ( x′ ).
Let f : M → N {\displaystyle f:M\to N} be a module homomorphism between left modules. The graph Γ f of f is the submodule of M ⊕ N given by Γ_f = {( x , f ( x )) | x ∈ M },
which is the image of the module homomorphism M → M ⊕ N , x → ( x , f ( x )), called the graph morphism .
The transpose of f is f^t : N^∗ → M^∗, f^t( α ) = α ∘ f , where N^∗ = Hom_R( N , R ) denotes the dual module of N .
If f is an isomorphism, then the transpose of the inverse of f is called the contragredient of f .
Consider a sequence of module homomorphisms ⋯ → M_{i+1} → M_i → M_{i−1} → ⋯, where the map M_{i+1} → M_i is denoted f_{i+1}.
Such a sequence is called a chain complex (or often just complex) if each composition is zero; i.e., f i ∘ f i + 1 = 0 {\displaystyle f_{i}\circ f_{i+1}=0} or equivalently the image of f i + 1 {\displaystyle f_{i+1}} is contained in the kernel of f i {\displaystyle f_{i}} . (If the numbers increase instead of decrease, then it is called a cochain complex; e.g., de Rham complex .) A chain complex is called an exact sequence if im ( f i + 1 ) = ker ( f i ) {\displaystyle \operatorname {im} (f_{i+1})=\operatorname {ker} (f_{i})} . A special case of an exact sequence is a short exact sequence: 0 → A → B → C → 0 (with maps f : A → B and g : B → C ),
where f {\displaystyle f} is injective, the kernel of g {\displaystyle g} is the image of f {\displaystyle f} and g {\displaystyle g} is surjective.
Any module homomorphism f : M → N {\displaystyle f:M\to N} defines an exact sequence 0 → K → M → N → C → 0,
where K {\displaystyle K} is the kernel of f {\displaystyle f} , and C {\displaystyle C} is the cokernel , that is the quotient of N {\displaystyle N} by the image of f {\displaystyle f} .
In the case of modules over a commutative ring , a sequence is exact if and only if it is exact at all the maximal ideals ; that is all sequences 0 → A_m → B_m → C_m → 0
are exact, where the subscript m {\displaystyle {\mathfrak {m}}} means the localization at a maximal ideal m {\displaystyle {\mathfrak {m}}} .
If f : M → B , g : N → B {\displaystyle f:M\to B,g:N\to B} are module homomorphisms, then they are said to form a fiber square (or pullback square ), denoted by M × B N , if it fits into 0 → M ×_B N → M ⊕ N → B (the last map being ϕ),
where ϕ ( x , y ) = f ( x ) − g ( y ) {\displaystyle \phi (x,y)=f(x)-g(y)} .
Example: Let B ⊂ A {\displaystyle B\subset A} be commutative rings, and let I be the annihilator of the quotient B -module A / B (which is an ideal of A ). Then canonical maps A → A / I , B / I → A / I {\displaystyle A\to A/I,B/I\to A/I} form a fiber square with B = A × A / I B / I . {\displaystyle B=A\times _{A/I}B/I.}
Let ϕ : M → M {\displaystyle \phi :M\to M} be an endomorphism of a finitely generated R -module M , for a commutative ring R . Then ϕ {\displaystyle \phi } is killed by its characteristic polynomial (the Cayley–Hamilton theorem for modules), and if ϕ {\displaystyle \phi } is surjective, then it is also injective, hence an automorphism of M .
See also: Herbrand quotient (which can be defined for any endomorphism with some finiteness conditions.)
An additive relation M → N {\displaystyle M\to N} from a module M to a module N is a submodule of M ⊕ N . {\displaystyle M\oplus N.} [ 3 ] In other words, it is a " many-valued " homomorphism defined on some submodule of M . The inverse f − 1 {\displaystyle f^{-1}} of f is the submodule { ( y , x ) | ( x , y ) ∈ f } {\displaystyle \{(y,x)|(x,y)\in f\}} . Any additive relation f determines a homomorphism from a submodule of M to a quotient of N , namely D( f ) → N /{ y | (0, y ) ∈ f },
where D ( f ) {\displaystyle D(f)} consists of all elements x in M such that ( x , y ) belongs to f for some y in N .
A transgression that arises from a spectral sequence is an example of an additive relation. | https://en.wikipedia.org/wiki/Module_homomorphism |
In algebra, a module spectrum is a spectrum with an action of a ring spectrum ; it generalizes a module in abstract algebra.
The ∞-category of (say right) module spectra is stable ; hence, it can be considered as either analog or generalization of the derived category of modules over a ring.
Lurie defines the K-theory of a ring spectrum R to be the K-theory of the ∞-category of perfect modules over R (a perfect module being defined as a compact object in the ∞-category of module spectra.)
| https://en.wikipedia.org/wiki/Module_spectrum
In algebraic geometry, the moduli stack of formal group laws is a stack classifying formal group laws and isomorphisms between them. It is denoted by M FG {\displaystyle {\mathcal {M}}_{\text{FG}}} . It is a "geometric object" that underlies the chromatic approach to the stable homotopy theory , a branch of algebraic topology .
Currently, it is not known whether M FG {\displaystyle {\mathcal {M}}_{\text{FG}}} is a derived stack or not. Hence, it is typical to work with stratifications. Let M FG n {\displaystyle {\mathcal {M}}_{\text{FG}}^{n}} be given so that M FG n ( R ) {\displaystyle {\mathcal {M}}_{\text{FG}}^{n}(R)} consists of formal group laws over R of height exactly n . They form a stratification of the moduli stack M FG {\displaystyle {\mathcal {M}}_{\text{FG}}} . Spec F p ¯ → M FG n {\displaystyle \operatorname {Spec} {\overline {\mathbb {F} _{p}}}\to {\mathcal {M}}_{\text{FG}}^{n}} is faithfully flat . In fact, M FG n {\displaystyle {\mathcal {M}}_{\text{FG}}^{n}} is of the form Spec F p ¯ / Aut ( F p ¯ , f ) {\displaystyle \operatorname {Spec} {\overline {\mathbb {F} _{p}}}/\operatorname {Aut} ({\overline {\mathbb {F} _{p}}},f)} where Aut ( F p ¯ , f ) {\displaystyle \operatorname {Aut} ({\overline {\mathbb {F} _{p}}},f)} is a profinite group called the Morava stabilizer group . The Lubin–Tate theory describes how the strata M FG n {\displaystyle {\mathcal {M}}_{\text{FG}}^{n}} fit together.
| https://en.wikipedia.org/wiki/Moduli_stack_of_formal_group_laws
In computing and mathematics , the modulo operation returns the remainder or signed remainder of a division , after one number is divided by another, the latter being called the modulus of the operation.
Given two positive numbers a and n , a modulo n (often abbreviated as a mod n ) is the remainder of the Euclidean division of a by n , where a is the dividend and n is the divisor . [ 1 ]
For example, the expression "5 mod 2" evaluates to 1, because 5 divided by 2 has a quotient of 2 and a remainder of 1, while "9 mod 3" would evaluate to 0, because 9 divided by 3 has a quotient of 3 and a remainder of 0.
Although typically performed with a and n both being integers , many computing systems now allow other types of numeric operands. The range of values for an integer modulo operation of n is 0 to n − 1 . a mod 1 is always 0.
When exactly one of a or n is negative, the basic definition breaks down, and programming languages differ in how these values are defined.
In mathematics , the result of the modulo operation is an equivalence class , and any member of the class may be chosen as representative ; however, the usual representative is the least positive residue , the smallest non-negative integer that belongs to that class (i.e., the remainder of the Euclidean division ). [ 2 ] However, other conventions are possible. Computers and calculators have various ways of storing and representing numbers; thus their definition of the modulo operation depends on the programming language or the underlying hardware .
In nearly all computing systems, the quotient q and the remainder r of a divided by n satisfy the following conditions: q ∈ ℤ, a = n q + r , | r | < | n |.   (1)
This still leaves a sign ambiguity if the remainder is non-zero: two possible choices for the remainder occur, one negative and the other positive; that choice determines which of the two consecutive quotients must be used to satisfy equation (1). In number theory, the positive remainder is always chosen, but in computing, programming languages choose depending on the language and the signs of a or n . [ a ] Standard Pascal and ALGOL 68 , for example, give a positive remainder (or 0) even for negative divisors, and some programming languages, such as C90, leave it to the implementation when either of n or a is negative (see the table under § In programming languages for details). Some systems leave a modulo 0 undefined, though others define it as a .
Many implementations use truncated division , for which the quotient is defined by q = trunc( a / n ),
where trunc {\displaystyle \operatorname {trunc} } is the integral part function ( rounding toward zero ), i.e. the truncation to zero significant digits.
Thus according to equation ( 1 ), the remainder is r = a − n · trunc( a / n ); it has the same sign as the dividend a , so it can take 2| n | − 1 values, −(| n | − 1) through | n | − 1.
Donald Knuth [ 3 ] promotes floored division , for which the quotient is defined by q = ⌊ a / n ⌋,
where ⌊ ⌋ {\displaystyle \lfloor \,\rfloor } is the floor function ( rounding down ).
Thus according to equation ( 1 ), the remainder is r = a − n ⌊ a / n ⌋; it has the same sign as the divisor n .
Raymond T. Boute [ 4 ] promotes Euclidean division , for which the quotient is defined by q = sgn( n ) ⌊ a / | n | ⌋,
where sgn is the sign function , ⌊ ⌋ {\displaystyle \lfloor \,\rfloor } is the floor function ( rounding down ), and ⌈ ⌉ {\displaystyle \lceil \,\rceil } is the ceiling function ( rounding up ).
Thus according to equation ( 1 ), the remainder is r = a − | n | ⌊ a / | n | ⌋, which is non-negative: 0 ≤ r < | n |.
Common Lisp and IEEE 754 use rounded division , for which the quotient is defined by q = round( a / n ),
where round is the round function ( rounding half to even ).
Thus according to equation ( 1 ), the remainder is r = a − n · round( a / n ); it falls between − n 2 {\displaystyle -{\frac {n}{2}}} and n 2 {\displaystyle {\frac {n}{2}}} , and its sign depends on which side of zero it falls within these boundaries.
Common Lisp also uses ceiling division , for which the quotient is defined by q = ⌈ a / n ⌉,
where ⌈⌉ is the ceiling function ( rounding up ).
Thus according to equation ( 1 ), the remainder is r = a − n ⌈ a / n ⌉; it has the opposite sign of that of the divisor.
If both the dividend and divisor are positive, then the truncated, floored, and Euclidean definitions agree.
If the dividend is positive and the divisor is negative, then the truncated and Euclidean definitions agree.
If the dividend is negative and the divisor is positive, then the floored and Euclidean definitions agree.
If both the dividend and divisor are negative, then the truncated and floored definitions agree.
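For illustration, the five conventions can be collected in one Python sketch (the helper name is ours; quotients are computed via float division, so the sketch is only meant for operands of moderate size):

```python
import math

def div_mod(a: int, n: int, mode: str) -> tuple[int, int]:
    """Quotient and remainder satisfying a == n*q + r and |r| < |n|
    under the five rounding conventions described above."""
    if mode == "truncated":
        q = math.trunc(a / n)
    elif mode == "floored":
        q = math.floor(a / n)
    elif mode == "euclidean":
        q = (1 if n > 0 else -1) * math.floor(a / abs(n))
    elif mode == "rounded":
        q = round(a / n)   # Python rounds half to even, as IEEE 754 does
    elif mode == "ceiling":
        q = math.ceil(a / n)
    else:
        raise ValueError(mode)
    return q, a - n * q

# -7 divided by 3 under each convention:
for mode in ("truncated", "floored", "euclidean", "rounded", "ceiling"):
    print(mode, div_mod(-7, 3, mode))
# truncated (-2, -1), floored (-3, 2), euclidean (-3, 2),
# rounded (-2, -1), ceiling (-2, -1)
```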
As described by Leijen,
"Boute argues that Euclidean division is superior to the other ones in terms of regularity and useful mathematical properties, although floored division, promoted by Knuth, is also a good definition. Despite its widespread use, truncated division is shown to be inferior to the other definitions."
However, truncated division satisfies the identity ( − a ) / b = − ( a / b ) = a / ( − b ) {\displaystyle ({-a})/b={-(a/b)}=a/({-b})} . [ 6 ] [ 7 ]
Some calculators have a mod() function button, and many programming languages have a similar function, expressed as mod( a , n ) , for example. Some also support expressions that use "%", "mod", or "Mod" as a modulo or remainder operator , such as a % n or a mod n .
For environments lacking a similar function, any of the three definitions above can be used.
When the result of a modulo operation has the sign of the dividend (truncated definition), it can lead to surprising mistakes.
For example, to test if an integer is odd , one might be inclined to test if the remainder by 2 is equal to 1:
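In a Python sketch that emulates truncated-% semantics (as in C; note that Python's own % operator is floored and does not exhibit this bug):

```python
def is_odd(n: int) -> bool:
    r = n - int(n / 2) * 2   # truncated remainder, as C's % computes it
    return r == 1            # wrong for negative odd n: r is -1, not 1
```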
But in a language where modulo has the sign of the dividend, that is incorrect, because when n (the dividend) is negative and odd, n mod 2 returns −1, and the function returns false.
One correct alternative is to test that the remainder is not 0 (because remainder 0 is the same regardless of the signs):
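Continuing the same sketch:

```python
def is_odd(n: int) -> bool:
    r = n - int(n / 2) * 2
    return r != 0            # correct under either sign convention
```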
Or with the binary arithmetic:
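Again as a Python sketch:

```python
def is_odd(n: int) -> bool:
    return (n & 1) == 1      # the low bit is 1 for odd n, two's complement included
```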
Modulo operations might be implemented such that a division with a remainder is calculated each time. For special cases, on some hardware, faster alternatives exist. For example, the modulo of powers of 2 can alternatively be expressed as a bitwise AND operation (assuming x is a positive integer, or using a non-truncating definition): x % 2^n == x & (2^n − 1).
Examples: x % 2 == x & 1 ; x % 4 == x & 3 ; x % 8 == x & 7 .
In devices and software that implement bitwise operations more efficiently than modulo, these alternative forms can result in faster calculations. [ 8 ]
Compiler optimizations may recognize expressions of the form expression % constant where constant is a power of two and automatically implement them as expression & (constant-1) , allowing the programmer to write clearer code without compromising performance. This simple optimization is not possible for languages in which the result of the modulo operation has the sign of the dividend (including C ), unless the dividend is of an unsigned integer type. This is because, if the dividend is negative, the modulo will be negative, whereas expression & (constant-1) will always be positive. For these languages, the equivalence x % 2^n == x < 0 ? x | ~(2^n - 1) : x & (2^n - 1) has to be used instead, expressed using bitwise OR, NOT and AND operations.
Optimizations for general constant-modulus operations also exist by calculating the division first using the constant-divisor optimization .
Some modulo operations can be factored or expanded similarly to other mathematical operations. This may be useful in cryptography proofs, such as the Diffie–Hellman key exchange . The properties involving multiplication, division, and exponentiation generally require that a and n are integers.
In addition, many computer systems provide a divmod functionality, which produces the quotient and the remainder at the same time. Examples include the x86 architecture 's IDIV instruction, the C programming language's div() function, and Python 's divmod() function.
Sometimes it is useful for the result of a modulo n to lie not between 0 and n − 1 , but between some number d and d + n − 1 . In that case, d is called an offset and d = 1 is particularly common.
There does not seem to be a standard notation for this operation, so let us tentatively use a mod_d n . We thus have the following definition: [ 61 ] x = a mod_d n just in case d ≤ x ≤ d + n − 1 and x mod n = a mod n . Clearly, the usual modulo operation corresponds to zero offset: a mod n = a mod_0 n .
The operation of modulo with offset is related to the floor function as follows: a mod_d n = a − n ⌊( a − d )/ n ⌋.
To see this, let x = a − n ⌊ a − d n ⌋ {\textstyle x=a-n\left\lfloor {\frac {a-d}{n}}\right\rfloor } . We first show that x mod n = a mod n . It is in general true that ( a + bn ) mod n = a mod n for all integers b ; thus, this is true also in the particular case when b = − ⌊ a − d n ⌋ {\textstyle b=-\!\left\lfloor {\frac {a-d}{n}}\right\rfloor } ; but that means that x mod n = ( a − n ⌊ a − d n ⌋ ) mod n = a mod n {\textstyle x{\bmod {n}}=\left(a-n\left\lfloor {\frac {a-d}{n}}\right\rfloor \right)\!{\bmod {n}}=a{\bmod {n}}} , which is what we wanted to prove. It remains to be shown that d ≤ x ≤ d + n − 1 . Let k and r be the integers such that a − d = kn + r with 0 ≤ r ≤ n − 1 (see Euclidean division ). Then ⌊ a − d n ⌋ = k {\textstyle \left\lfloor {\frac {a-d}{n}}\right\rfloor =k} , thus x = a − n ⌊ a − d n ⌋ = a − n k = d + r {\textstyle x=a-n\left\lfloor {\frac {a-d}{n}}\right\rfloor =a-nk=d+r} . Now take 0 ≤ r ≤ n − 1 and add d to both sides, obtaining d ≤ d + r ≤ d + n − 1 . But we've seen that x = d + r , so we are done.
The modulo with offset a mod_d n is implemented in Mathematica as Mod[a, n, d] . [ 61 ]
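A direct Python rendering of this formula (Python's // is the floor division the formula requires; the function name is ours):

```python
def mod_offset(a: int, n: int, d: int) -> int:
    """The x with x ≡ a (mod n) and d <= x <= d + n - 1, like Mod[a, n, d]."""
    return a - n * ((a - d) // n)

assert mod_offset(10, 7, 1) == 3        # 10 ≡ 3 (mod 7), and 3 lies in [1, 7]
assert mod_offset(10, 7, 0) == 10 % 7   # zero offset is the usual modulo
```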
Despite the mathematical elegance of Knuth's floored division and Euclidean division, it is generally much more common to find a truncated division-based modulo in programming languages. Leijen provides the following algorithms for calculating the two divisions given a truncated integer division: [ 5 ]
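Leijen's originals are written in a C-like notation; the following Python sketch renders the same idea, starting from an emulated truncated-division primitive:

```python
def divmod_trunc(a: int, n: int) -> tuple[int, int]:
    q = int(a / n)               # truncated quotient, the assumed primitive
    return q, a - n * q

def divmod_floor(a: int, n: int) -> tuple[int, int]:
    q, r = divmod_trunc(a, n)
    if r != 0 and (r < 0) != (n < 0):   # remainder and divisor signs differ
        q, r = q - 1, r + n
    return q, r

def divmod_euclid(a: int, n: int) -> tuple[int, int]:
    q, r = divmod_trunc(a, n)
    if r < 0:                            # force a non-negative remainder
        q, r = (q - 1, r + n) if n > 0 else (q + 1, r - n)
    return q, r
```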
For both cases, the remainder can be calculated independently of the quotient, but not vice versa. The operations are combined here to save screen space, as the logical branches are the same. | https://en.wikipedia.org/wiki/Modulo |
Modulo- N code is a lossy compression algorithm used to compress correlated data sources using modular arithmetic .
When applied to two nodes in a network whose data are in close range of each other, modulo- N code requires one node (say odd) to send the coded data value as the raw data M o = D o {\displaystyle M_{o}=D_{o}} ; the even node is required to send the coded data as M e = D e mod N {\displaystyle M_{e}=D_{e}{\bmod {N}}} . Hence the name modulo- N code.
Since at least log 2 K {\displaystyle \log _{2}K} bits are required to represent a number K in binary, the modulo coded data of the two nodes requires log 2 M o + log 2 M e {\displaystyle \log _{2}M_{o}+\log _{2}M_{e}} bits. Since M e ≤ N {\displaystyle M_{e}\leq N} , we can generally expect log 2 M e ≤ log 2 M o {\displaystyle \log _{2}M_{e}\leq \log _{2}M_{o}} . This is how compression is achieved.
A compression ratio achieved is C.R. = log 2 M o + log 2 M e 2 log 2 M o . {\displaystyle {\text{C.R.}}={\frac {\log _{2}M_{o}+\log _{2}M_{e}}{2\log _{2}M_{o}}}.}
At the receiver, by joint decoding, we may complete the process of extracting the data and rebuilding the original values. The code from the even node is reconstructed by the assumption that it must be close to the data from the odd node. Hence the decoding algorithm retrieves the even node data as the value of the form N · k + M e closest to M o .
The decoder essentially finds the closest match to M o ≃ N . k + M e {\displaystyle M_{o}\simeq N.k+M_{e}} and the decoded value is declared as N . k + M e {\displaystyle N.k+M_{e}}
For a mod-8 code, the encoder and decoder operate as in the following sketch.
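A minimal Python sketch of the encoder and decoder (function names are ours; recovery assumes the two readings differ by less than N /2, per the limitation noted below):

```python
def encode(d_odd: int, d_even: int, n: int) -> tuple[int, int]:
    return d_odd, d_even % n          # odd node sends raw data, even node mod n

def decode(m_odd: int, m_even: int, n: int) -> int:
    k = round((m_odd - m_even) / n)   # nearest multiple of n
    return n * k + m_even             # value congruent to m_even closest to m_odd

m_o, m_e = encode(43, 41, 8)          # mod-8 example: sends (43, 1)
assert decode(m_o, m_e, 8) == 41      # recovered, since |43 - 41| < 8/2
```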
Modulo- N decoding is similar to phase unwrapping and has the same limitation: If the difference from one node to the next is more than N /2 (if the phase changes from one sample to the next more than π {\displaystyle \pi } ), then decoding leads to an incorrect value. | https://en.wikipedia.org/wiki/Modulo-N_code |
In propositional logic , modus tollens ( / ˈ m oʊ d ə s ˈ t ɒ l ɛ n z / ) ( MT ), also known as modus tollendo tollens ( Latin for "mode that by denying denies") [ 2 ] and denying the consequent , [ 3 ] is a deductive argument form and a rule of inference . Modus tollens is a mixed hypothetical syllogism that takes the form of "If P , then Q . Not Q . Therefore, not P ." It is an application of the general truth that if a statement is true, then so is its contrapositive . The form shows that inference from P implies Q to the negation of Q implies the negation of P is a valid argument.
The history of the inference rule modus tollens goes back to antiquity . [ 4 ] The first to explicitly describe the argument form modus tollens was Theophrastus . [ 5 ]
Modus tollens is closely related to modus ponens . There are two similar, but invalid, forms of argument : affirming the consequent and denying the antecedent . See also contraposition and proof by contrapositive .
The form of a modus tollens argument is a mixed hypothetical syllogism , with two premises and a conclusion:
The first premise is a conditional ("if-then") claim, such as P implies Q . The second premise is an assertion that Q , the consequent of the conditional claim, is not the case. From these two premises it can be logically concluded that P , the antecedent of the conditional claim, is also not the case.
For example:
Supposing that the premises are both true (the dog will bark if it detects an intruder, and does indeed not bark), it logically follows that no intruder has been detected. This is a valid argument since it is not possible for the conclusion to be false if the premises are true. (It is conceivable that there may have been an intruder that the dog did not detect, but that does not invalidate the argument; the first premise is "if the dog detects an intruder". The thing of importance is that the dog detects or does not detect an intruder, not whether there is one.)
Example 1:
Example 2:
Every use of modus tollens can be converted to a use of modus ponens and one use of transposition to the premise which is a material implication. For example:
Likewise, every use of modus ponens can be converted to a use of modus tollens and transposition.
The modus tollens rule can be stated formally as: P → Q , ¬ Q ∴ ¬ P ,
where P → Q {\displaystyle P\to Q} stands for the statement "P implies Q". ¬ Q {\displaystyle \neg Q} stands for "it is not the case that Q" (or in brief "not Q"). Then, whenever " P → Q {\displaystyle P\to Q} " and " ¬ Q {\displaystyle \neg Q} " each appear by themselves as a line of a proof , then " ¬ P {\displaystyle \neg P} " can validly be placed on a subsequent line.
The modus tollens rule may be written in sequent notation: P → Q , ¬ Q ⊢ ¬ P ,
where ⊢ {\displaystyle \vdash } is a metalogical symbol meaning that ¬ P {\displaystyle \neg P} is a syntactic consequence of P → Q {\displaystyle P\to Q} and ¬ Q {\displaystyle \neg Q} in some logical system ;
or as the statement of a functional tautology or theorem of propositional logic: (( P → Q ) ∧ ¬ Q ) → ¬ P ,
where P {\displaystyle P} and Q {\displaystyle Q} are propositions expressed in some formal system ;
or including assumptions: Γ ⊢ P → Q and Γ ⊢ ¬ Q together give Γ ⊢ ¬ P ,
though since the rule does not change the set of assumptions, this is not strictly necessary.
More complex rewritings involving modus tollens are often seen, for instance in set theory : P ⊆ Q , x ∉ Q ∴ x ∉ P .
("P is a subset of Q. x is not in Q. Therefore, x is not in P.")
Also in first-order predicate logic : ∀ x ( P( x ) → Q( x ) ), ¬Q( y ) ∴ ¬P( y ).
("For all x if x is P then x is Q. y is not Q. Therefore, y is not P.")
Strictly speaking these are not instances of modus tollens , but they may be derived from modus tollens using a few extra steps.
The validity of modus tollens can be clearly demonstrated through a truth table .

p | q | p → q
T | T | T
T | F | F
F | T | T
F | F | T
In instances of modus tollens we assume as premises that p → q is true and q is false. There is only one line of the truth table—the fourth line—which satisfies these two conditions. In this line, p is false. Therefore, in every instance in which p → q is true and q is false, p must also be false.
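The same exhaustive check can be written as a short Python sketch:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

for p, q in product([True, False], repeat=2):
    if implies(p, q) and not q:       # both premises hold...
        assert not p                  # ...only on the line where p is false
```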
Modus tollens represents an instance of the law of total probability combined with Bayes' theorem expressed as:
Pr ( P ) = Pr ( P ∣ Q ) Pr ( Q ) + Pr ( P ∣ ¬ Q ) Pr ( ¬ Q ) , {\displaystyle \Pr(P)=\Pr(P\mid Q)\Pr(Q)+\Pr(P\mid \lnot Q)\Pr(\lnot Q)\,,}
where the conditionals Pr ( P ∣ Q ) {\displaystyle \Pr(P\mid Q)} and Pr ( P ∣ ¬ Q ) {\displaystyle \Pr(P\mid \lnot Q)} are obtained with (the extended form of) Bayes' theorem expressed as:
Pr ( P ∣ Q ) = Pr ( Q ∣ P ) a ( P ) Pr ( Q ∣ P ) a ( P ) + Pr ( Q ∣ ¬ P ) a ( ¬ P ) {\displaystyle \Pr(P\mid Q)={\frac {\Pr(Q\mid P)\,a(P)}{\Pr(Q\mid P)\,a(P)+\Pr(Q\mid \lnot P)\,a(\lnot P)}}\;\;\;} and Pr ( P ∣ ¬ Q ) = Pr ( ¬ Q ∣ P ) a ( P ) Pr ( ¬ Q ∣ P ) a ( P ) + Pr ( ¬ Q ∣ ¬ P ) a ( ¬ P ) . {\displaystyle \Pr(P\mid \lnot Q)={\frac {\Pr(\lnot Q\mid P)\,a(P)}{\Pr(\lnot Q\mid P)\,a(P)+\Pr(\lnot Q\mid \lnot P)\,a(\lnot P)}}.}
In the equations above Pr ( Q ) {\displaystyle \Pr(Q)} denotes the probability of Q {\displaystyle Q} , and a ( P ) {\displaystyle a(P)} denotes the base rate (aka. prior probability ) of P {\displaystyle P} . The conditional probability Pr ( Q ∣ P ) {\displaystyle \Pr(Q\mid P)} generalizes the logical statement P → Q {\displaystyle P\to Q} , i.e. in addition to assigning TRUE or FALSE we can also assign any probability to the statement. Assume that Pr ( Q ) = 1 {\displaystyle \Pr(Q)=1} is equivalent to Q {\displaystyle Q} being TRUE, and that Pr ( Q ) = 0 {\displaystyle \Pr(Q)=0} is equivalent to Q {\displaystyle Q} being FALSE. It is then easy to see that Pr ( P ) = 0 {\displaystyle \Pr(P)=0} when Pr ( Q ∣ P ) = 1 {\displaystyle \Pr(Q\mid P)=1} and Pr ( Q ) = 0 {\displaystyle \Pr(Q)=0} . This is because Pr ( ¬ Q ∣ P ) = 1 − Pr ( Q ∣ P ) = 0 {\displaystyle \Pr(\lnot Q\mid P)=1-\Pr(Q\mid P)=0} so that Pr ( P ∣ ¬ Q ) = 0 {\displaystyle \Pr(P\mid \lnot Q)=0} in the last equation. Therefore, the product terms in the first equation always have a zero factor so that Pr ( P ) = 0 {\displaystyle \Pr(P)=0} which is equivalent to P {\displaystyle P} being FALSE. Hence, the law of total probability combined with Bayes' theorem represents a generalization of modus tollens . [ 6 ]
Modus tollens represents an instance of the abduction operator in subjective logic expressed as:
ω P ‖ ~ Q A = ( ω Q | P A , ω Q | ¬ P A ) ⊚ ~ ( a P , ω Q A ) , {\displaystyle \omega _{P{\tilde {\|}}Q}^{A}=(\omega _{Q|P}^{A},\omega _{Q|\lnot P}^{A}){\widetilde {\circledcirc }}(a_{P},\,\omega _{Q}^{A})\,,}
where ω Q A {\displaystyle \omega _{Q}^{A}} denotes the subjective opinion about Q {\displaystyle Q} , and ( ω Q | P A , ω Q | ¬ P A ) {\displaystyle (\omega _{Q|P}^{A},\omega _{Q|\lnot P}^{A})} denotes a pair of binomial conditional opinions, as expressed by source A {\displaystyle A} . The parameter a P {\displaystyle a_{P}} denotes the base rate (aka. the prior probability ) of P {\displaystyle P} . The abduced marginal opinion on P {\displaystyle P} is denoted ω P ‖ ~ Q A {\displaystyle \omega _{P{\tilde {\|}}Q}^{A}} . The conditional opinion ω Q | P A {\displaystyle \omega _{Q|P}^{A}} generalizes the logical statement P → Q {\displaystyle P\to Q} , i.e. in addition to assigning TRUE or FALSE the source A {\displaystyle A} can assign any subjective opinion to the statement. The case where ω Q A {\displaystyle \omega _{Q}^{A}} is an absolute TRUE opinion is equivalent to source A {\displaystyle A} saying that Q {\displaystyle Q} is TRUE, and the case where ω Q A {\displaystyle \omega _{Q}^{A}} is an absolute FALSE opinion is equivalent to source A {\displaystyle A} saying that Q {\displaystyle Q} is FALSE. The abduction operator ⊚ ~ {\displaystyle {\widetilde {\circledcirc }}} of subjective logic produces an absolute FALSE abduced opinion ω P ‖ ~ Q A {\displaystyle \omega _{P{\widetilde {\|}}Q}^{A}} when the conditional opinion ω Q | P A {\displaystyle \omega _{Q|P}^{A}} is absolute TRUE and the consequent opinion ω Q A {\displaystyle \omega _{Q}^{A}} is absolute FALSE. Hence, subjective logic abduction represents a generalization of both modus tollens and of the Law of total probability combined with Bayes' theorem . [ 7 ] | https://en.wikipedia.org/wiki/Modus_tollens |
Modus vivendi (plural modi vivendi ) is a Latin phrase that means "mode of living" or " way of life ". In international relations , it often is used to mean an arrangement or agreement that allows conflicting parties to coexist in peace. In science, it is used to describe lifestyles . [ 1 ]
Modus means "mode", "way", "method", or "manner". Vivendi means "of living". The phrase is often used to describe informal and temporary arrangements in political affairs. For example, if two sides reach a modus vivendi regarding disputed territories, despite political, historical or cultural incompatibilities, an accommodation of their respective differences is established for the sake of contingency .
In diplomacy , a modus vivendi is an instrument for establishing an international accord of a temporary or provisional nature, intended to be replaced by a more substantial and thorough agreement, such as a treaty . [ 2 ] Armistices and instruments of surrender are intended to achieve a modus vivendi .
The term often refers to Anglo-French relations from the 1815 end of the Napoleonic Wars to the 1904 Entente Cordiale . [ citation needed ]
On 7 January 1948, the United States, Britain and Canada, concluded an agreement known as the modus vivendi , that allowed for limited sharing of technical information on nuclear weapons which officially repealed the Quebec Agreement . [ 3 ] | https://en.wikipedia.org/wiki/Modus_vivendi |
Moeller staining involves the use of a steamed dye reagent in order to increase the stainability of endospores . Carbol fuchsin is the primary stain used in this method. Endospores are stained red, while the counterstain methylene blue stains the vegetative bacteria blue.
Endospores are surrounded by a spore coat that is highly resistant to excessive heat, freezing, desiccation , and chemical agents. More importantly, for identification, spores are resistant to commonly employed staining techniques; therefore alternative staining methods are required.
Carbol fuchsin is applied to a heat-fixed slide. The slide is then heated over a bunsen burner, or suspended over a hot water bath, covered with a paper towel, and steamed for 3 minutes. The slide is rinsed with acidified ethanol, and counter-stained with Methylene blue . An improved method involves the addition of the surfactant Tergitol 7 to the carbol fuchsin stain, and the omission of the steaming step. [ 1 ]
| https://en.wikipedia.org/wiki/Moeller_stain
In number theory , Moessner's theorem or Moessner's magic [ 1 ] is related to an arithmetical algorithm to produce an infinite sequence of the n -th powers of the positive integers 1 n , 2 n , 3 n , 4 n , ⋯ , {\displaystyle 1^{n},2^{n},3^{n},4^{n},\cdots ~,} with n ≥ 1 , {\displaystyle n\geq 1~,} by recursively manipulating the sequence of integers algebraically. The algorithm was first published by Alfred Moessner [ 2 ] in 1951; the first proof of its validity was given by Oskar Perron [ 3 ] that same year. [ 4 ]
For example, for n = 2 {\displaystyle n=2} , one can remove every even number, resulting in ( 1 , 3 , 5 , 7 ⋯ ) {\displaystyle (1,3,5,7\cdots )} , and then add each odd number to the sum of all previous elements, providing ( 1 , 4 , 9 , 16 , ⋯ ) = ( 1 2 , 2 2 , 3 2 , 4 2 ⋯ ) {\displaystyle (1,4,9,16,\cdots )=(1^{2},2^{2},3^{2},4^{2}\cdots )} .
Write down every positive integer and remove every n {\displaystyle n} -th element, with n {\displaystyle n} a positive integer. Build a new sequence of partial sums with the remaining numbers. Continue by removing every ( n − 1 ) {\displaystyle (n-1)} -st element in the new sequence and producing a new sequence of partial sums. For the k {\displaystyle k} -th sequence, remove every ( n − k + 1 ) {\displaystyle (n-k+1)} -st element and produce a new sequence of partial sums.
The procedure stops at the n {\displaystyle n} -th sequence. The remaining sequence will correspond to 1 n , 2 n , 3 n , 4 n ⋯ . {\displaystyle 1^{n},2^{n},3^{n},4^{n}\cdots ~.} [ 4 ] [ 5 ]
The initial sequence is the sequence of positive integers, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, …
For n = 4 {\displaystyle n=4} , we remove every fourth number from the sequence of integers, leaving 1, 2, 3, 5, 6, 7, 9, 10, 11, 13, 14, 15, …, and add up each element to the sum of the previous elements, giving the partial sums 1, 3, 6, 11, 17, 24, 33, 43, 54, 67, 81, 96, …
Now we remove every third element, leaving 1, 3, 11, 17, 33, 43, 67, 81, …, and continue to add up the partial sums: 1, 4, 15, 32, 65, 108, 175, 256, …
Remove every second element, leaving 1, 15, 65, 175, …, and continue to add up the partial sums: 1, 16, 81, 256, …,
which recovers 1 4 , 2 4 , 3 4 , 4 4 , ⋯ {\displaystyle 1^{4},2^{4},3^{4},4^{4},\cdots } .
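The procedure is straightforward to implement. A Python sketch (the function name is ours; it starts from n·length integers so that every deletion round divides the list evenly):

```python
from itertools import accumulate

def moessner(n: int, length: int) -> list[int]:
    """Return [1**n, 2**n, ..., length**n] via Moessner's procedure."""
    seq = list(range(1, n * length + 1))
    for step in range(n - 1):
        k = n - step                                   # delete every k-th element
        seq = [x for i, x in enumerate(seq, 1) if i % k != 0]
        seq = list(accumulate(seq))                    # partial sums
    return seq[:length]

assert moessner(2, 5) == [1, 4, 9, 16, 25]
assert moessner(4, 4) == [1, 16, 81, 256]
```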
If the triangular numbers are removed instead, a similar procedure leads to the sequence of factorials 1 ! , 2 ! , 3 ! , 4 ! , ⋯ . {\displaystyle 1!,2!,3!,4!,\cdots ~.} [ 1 ] | https://en.wikipedia.org/wiki/Moessner's_theorem |
Moexipril was an angiotensin converting enzyme inhibitor (ACE inhibitor) [ 1 ] used for the treatment of hypertension and congestive heart failure . Moexipril can be administered alone or with other antihypertensives or diuretics . [ 2 ]
It works by inhibiting the conversion of angiotensin I to angiotensin II . [ 3 ]
It was patented in 1980 and approved for medical use in 1995. [ 4 ] Moexipril is available from Schwarz Pharma under the trade name Univasc . [ 3 ] [ 5 ]
Moexipril is generally well tolerated in elderly patients with hypertension. [ 6 ] Hypotension, dizziness, increased cough, diarrhea, flu syndrome, fatigue, and flushing have been found to affect less than 6% of patients who were prescribed moexipril. [ 3 ] [ 6 ]
As an ACE inhibitor, moexipril causes a decrease in ACE. This blocks the conversion of angiotensin I to angiotensin II. Blockage of angiotensin II limits hypertension within the vasculature. Additionally, moexipril has been found to possess cardioprotective properties. Rats given moexipril one week prior to induction of myocardial infarction displayed decreased infarct size. [ 7 ] The cardioprotective effects of ACE inhibitors are mediated through a combination of angiotensin II inhibition and bradykinin proliferation. [ 8 ] [ 9 ] Increased levels of bradykinin stimulate the production of prostaglandin E 2 [ 10 ] and nitric oxide, [ 9 ] which cause vasodilation and continue to exert antiproliferative effects. [ 8 ] Inhibition of angiotensin II by moexipril decreases remodeling effects on the cardiovascular system. Indirectly, angiotensin II stimulates the production of endothelin 1 and 3 (ET1, ET3) [ 11 ] and the transforming growth factor beta-1 ( TGF-β1 ), [ 12 ] all of which have tissue proliferative effects that are blocked by the actions of moexipril. The antiproliferative effects of moexipril have also been demonstrated by in vitro studies where moexipril inhibits the estrogen-stimulated growth of neonatal cardiac fibroblasts in rats. [ 9 ] Other ACE inhibitors have also been found to produce these actions.
Moexipril is available as a prodrug moexipril hydrochloride, and is metabolized in the liver to form the pharmacologically active compound moexiprilat. Formation of moexiprilat is caused by hydrolysis of an ethyl ester group. [ 13 ] Moexipril is incompletely absorbed after oral administration, and its bioavailability is low. [ 14 ] The long pharmacokinetic half-life and persistent ACE inhibition of moexipril allows once-daily administration. [ 15 ]
Moexipril is highly lipophilic , [ 2 ] and is in the same hydrophobic range as quinapril , benazepril , and ramipril . [ 15 ] Lipophilic ACE inhibitors are able to penetrate membranes more readily, thus tissue ACE may be a target in addition to plasma ACE. A significant reduction in tissue ACE (lung, myocardium, aorta, and kidney) activity has been shown after moexipril use. [ 8 ]
It has additional PDE4-inhibiting effects. [ 16 ]
The synthesis of the all-important dipeptide-like side chain involves alkylation of the tert -butyl ester of L -alanine ( 2 ) with ethyl 2-bromo-4-phenylbutanoate ( 1 ); the predominance of the desired isomer is attributable to asymmetric induction from the adjacent chiral center. Reaction of the product with hydrogen chloride then cleaves the tert -butyl group to give the half acid ( 3 ). [ 19 ] Coupling of that acid to the secondary amine on tetrahydroisoquinoline ( 4 ) gives the corresponding amide. The tert -butyl ester in this product is again cleaved with hydrogen chloride to afford moexipril ( 5 ). | https://en.wikipedia.org/wiki/Moexipril
Moffatt eddies are sequences of eddies that develop in corners bounded by plane walls (or sometimes between a wall and a free surface) due to an arbitrary disturbance acting at asymptotically large distances from the corner. Although the source of motion is the arbitrary disturbance at large distances, the eddies develop quite independently and thus solution of these eddies emerges from an eigenvalue problem, a self-similar solution of the second kind.
The eddies are named after Keith Moffatt , who discovered these eddies in 1964, [ 1 ] although some of the results were already obtained by William Reginald Dean and P. E. Montagnon in 1949. [ 2 ] Lord Rayleigh also studied the problem of flow near the corner with homogeneous boundary conditions in 1911. [ 3 ] Moffatt eddies inside cones are solved by P. N. Shankar . [ 4 ]
Near the corner, the flow can be assumed to be Stokes flow . Describing the two-dimensional planar problem by the cylindrical coordinates ( r , θ ) {\displaystyle (r,\theta )} with velocity components ( u r , u θ ) {\displaystyle (u_{r},u_{\theta })} defined by a stream function such that u_r = (1/ r ) ∂ψ/∂θ and u_θ = −∂ψ/∂ r ,
the governing equation can be shown to be simply the biharmonic equation ∇ 4 ψ = 0 {\displaystyle \nabla ^{4}\psi =0} . The equation has to be solved with homogeneous boundary conditions (conditions taken for two walls separated by angle 2 α {\displaystyle 2\alpha } ): ψ = ∂ψ/∂θ = 0 on the walls θ = ± α .
The Taylor scraping flow is similar to this problem but driven by an inhomogeneous boundary condition. The solution is obtained by the eigenfunction expansion [ 5 ] ψ( r , θ) = Σ_n A_n r^{λ_n} f_{λ_n}(θ),
where A n {\displaystyle A_{n}} are constants and the real parts of the eigenvalues are always greater than unity. The eigenvalues λ n {\displaystyle \lambda _{n}} are functions of the angle α {\displaystyle \alpha } , but regardless the eigenfunctions can be written down for any λ {\displaystyle \lambda } : f_λ(θ) = A cos λθ + B sin λθ + C cos(λ − 2)θ + D sin(λ − 2)θ.
For the antisymmetric solution, the eigenfunction is even and hence B = D = 0 {\displaystyle B=D=0} , and the boundary conditions demand sin 2 ( λ − 1 ) α = − ( λ − 1 ) sin 2 α {\displaystyle \sin 2(\lambda -1)\alpha =-(\lambda -1)\sin 2\alpha } . The equation admits no real root when 2 α < 146 {\displaystyle 2\alpha <146} °. These complex eigenvalues indeed correspond to the Moffatt eddies. The complex eigenvalues are given by λ n = 1 + ( 2 α ) − 1 ( ξ n + i η n ) {\displaystyle \lambda _{n}=1+(2\alpha )^{-1}(\xi _{n}+i\eta _{n})} , where ξ n {\displaystyle \xi _{n}} and η n {\displaystyle \eta _{n}} are real and depend on the opening angle through the parameter k defined below.
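The complex roots of this transcendental equation can be located numerically; the following Python sketch (our own construction, not from the literature) applies Newton iteration to g(λ) = sin 2(λ − 1)α + (λ − 1) sin 2α, with the starting guess selecting which eigenvalue is found:

```python
import cmath, math

def moffatt_eigenvalue(alpha: float, guess: complex, tol: float = 1e-12) -> complex:
    """Newton iteration for sin(2(l-1)a) = -(l-1) sin(2a) in the complex plane."""
    lam = guess
    for _ in range(100):
        g = cmath.sin(2 * (lam - 1) * alpha) + (lam - 1) * math.sin(2 * alpha)
        dg = 2 * alpha * cmath.cos(2 * (lam - 1) * alpha) + math.sin(2 * alpha)
        step = g / dg
        lam -= step
        if abs(step) < tol:
            return lam
    raise RuntimeError("Newton iteration did not converge")

# A corner of total angle 2*alpha = 60 degrees (< 146 degrees, so roots are complex)
lam1 = moffatt_eigenvalue(math.radians(30), 4 + 2j)
print(lam1)   # one complex eigenvalue; its real part exceeds 1, as required
```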
Here k = sin 2 α / 2 α {\displaystyle k=\sin 2\alpha /2\alpha } . | https://en.wikipedia.org/wiki/Moffatt_eddies |
The Mohai Agnes spring is in Hungary, in Fejér county, near the Bakony hills, in the village of Moha.
Moha is located in the central part of Fejér county , to the north of Székesfehérvár . The Agnes spring is found at the northern gate of the 2–4 km long trench between the Bakony and the Vértes . Most of the region is well known for its high groundwater level and for its known and hidden springs.
The first written records date from 1374. Among the local people it was known as "Aldokut" (blessing well); nowadays it is called the Agnes spring.
The first known chemical analysis of the water was made in 1810. Further analyses of the Mohai Agnes water were published in Leipzig in 1835 and in Stuttgart in 1876. In 1880 the Hungarian Academy of Sciences supported further study of the water.
According to the chemical analyses, the main components are calcium carbonate , magnesium , sodium , potassium , lithium , iron oxide , calcium sulfate , silicic acid and titanic acid . The last of these, the analyses emphasize, is special, since it is very rare; similar levels have been noted for only one Norwegian mineral water. Regarding curative effects, the Mohai Agnes is good for respiratory and digestive diseases. The 11.2 °C water, enriched with natural carbon dioxide, has been known since the 14th century. The Mohai Agnes mineral water contains various minerals, each of which is present in natural, ionic form, and there are several other characteristics that make this high-quality drink stand out from other mineral waters distributed in Hungary.
Thanks to the hard work of Amade Tadde, the Bajzáth family and Imre Kempelen in exploring the spring and starting its bottling, the Mohai Agnes mineral water became popular throughout the country and well known across Europe. Distribution of the mineral water started in the 19th century and grew continuously until World War I .
After the political transformation and privatization, the Elore Mezogazdasagi Termelo es Szolgaltato Szovetkezet (Forward Agricultural Producer and Service Provider Co-operative) bottled the Mohai Agnes. In 1999 Karsai Holding Plc. obtained a majority stake, and in August 2000 Karsai launched the Mohai Agnes Ltd.
In 2005 the Mohai Agnes was sold to the Uniresal Food Ltd., and the mineral water appeared on the shelves of the CBA branches. Despite remarkable investments and modernizations, liquidation began in 2008.
| https://en.wikipedia.org/wiki/Mohai_Agnes_mineral_water
Mohamed M. Atalla ( Arabic : محمد عطاالله ; August 4, 1924 – December 30, 2009) was an Egyptian-American engineer, physicist , cryptographer , inventor and entrepreneur. He was a semiconductor pioneer who made important contributions to modern electronics . He is best known for inventing, along with his colleague Dawon Kahng , the MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) in 1959, which along with Atalla's earlier surface passivation processes, had a significant impact on the development of the electronics industry . He is also known as the founder of the data security company Atalla Corporation (now Utimaco Atalla ), founded in 1972. He received the Stuart Ballantine Medal (now the Benjamin Franklin Medal in physics) and was inducted into the National Inventors Hall of Fame for his important contributions to semiconductor technology as well as data security.
Born in Port Said , Egypt, he was educated at Cairo University in Egypt and then Purdue University in the United States, before joining Bell Labs in 1949 and later adopting the more anglicized " John " or " Martin " M. Atalla as professional names. He made several important contributions to semiconductor technology at Bell Labs, including his development of the surface passivation process and his demonstration of the MOSFET with Kahng in 1959.
His work on MOSFET was initially overlooked at Bell, which led to his resignation from Bell and joining Hewlett-Packard (HP), founding its Semiconductor Lab in 1962 and then HP Labs in 1966, before leaving to join Fairchild Semiconductor , founding its Microwave & Optoelectronics division in 1969. His work at HP and Fairchild included research on Schottky diode , gallium arsenide (GaAs), gallium arsenide phosphide (GaAsP), indium arsenide (InAs) and light-emitting diode (LED) technologies. He later left the semiconductor industry , and became an entrepreneur in cryptography and data security . In 1972, he founded Atalla Corporation, and filed a patent for a remote Personal Identification Number (PIN) security system. In 1973, he released the first hardware security module , the "Atalla Box", which encrypted PIN and ATM messages, and went on to secure the majority of the world's ATM transactions. He later founded the Internet security company TriStrata Security in the 1990s. He died in Atherton , California , on December 30, 2009.
Mohamed Mohamed Atalla [ 2 ] [ 3 ] [ 4 ] was born in Port Said , Kingdom of Egypt . [ 5 ] He studied at Cairo University in Egypt, where he received his Bachelor of Science degree. He later moved to the United States to study mechanical engineering at Purdue University . There, he received his master's degree ( MSc ) in 1947 and his doctorate ( PhD ) in 1949, both in mechanical engineering . [ 5 ] His MSc thesis was titled "High Speed Flow in Square Diffusers" [ 6 ] [ full citation needed ] and his PhD thesis was titled "High Speed Compressible Flow in Square Diffusers". [ 3 ]
After completing his PhD at Purdue University , Atalla was employed at Bell Telephone Laboratories (BTL) in 1949. [ 7 ] In 1950, he began working at Bell's New York City operations, where he worked on problems related to the reliability of electromechanical relays [ 8 ] and on circuit-switched telephone networks . [ 9 ] With the emergence of transistors , Atalla was moved to the Murray Hill lab, where he began leading a small transistor research team in 1956. [ 8 ] Despite coming from a mechanical engineering background and having no formal education in physical chemistry , he proved himself to be a quick learner in physical chemistry and semiconductor physics , eventually demonstrating a high level of skill in these fields. [ 10 ] He researched, among other things, the surface properties of silicon semiconductors and the use of silica as a protective layer of silicon semiconductor devices . [ 7 ] He eventually adopted the pseudonyms "Martin" M. Atalla and "John" M. Atalla for his professional career. [ 4 ]
Between 1956 and 1960, Atalla led a small team of several BTL researchers, including Eileen Tannenbaum, Edwin Joseph Scheibner and Dawon Kahng . [ 11 ] They were new recruits at BTL, like himself, with no senior researchers on the team. Their work was initially not taken seriously by senior management at BTL and its owner AT&T , due to the team consisting of new recruits, and due to the team leader Atalla himself coming from a mechanical engineering background, in contrast to the physicists , physical chemists and mathematicians who were taken more seriously, despite Atalla demonstrating advanced skills in physical chemistry and semiconductor physics. [ 10 ]
Despite working mostly on their own, [ 10 ] Atalla and his team made significant advances in semiconductor technology. [ 11 ] According to Fairchild Semiconductor engineer Chih-Tang Sah , the work of Atalla and his team during 1956–1960 was "the most important and significant technology advance" in silicon semiconductor technology. [ 11 ]
An initial focus of Atalla's research was to solve the problem of silicon surface states . At the time, the electrical conductivity of semiconductor materials such as germanium and silicon were limited by unstable quantum surface states, [ 12 ] where electrons are trapped at the surface, due to dangling bonds that occur because unsaturated bonds are present at the surface. [ 13 ] This prevented electricity from reliably penetrating the surface to reach the semiconducting silicon layer. [ 7 ] [ 14 ] Due to the surface state problem, germanium was the dominant semiconductor material of choice for transistors and other semiconductor devices in the early semiconductor industry , as germanium was capable of higher carrier mobility . [ 15 ] [ 16 ]
He made a breakthrough with his development of the surface passivation process. [ 7 ] This is the process by which a semiconductor surface is rendered inert , and does not change semiconductor properties as a result of interaction with air or other materials in contact with the surface or edge of the crystal . The surface passivation process was first developed by Atalla in the late 1950s. [ 7 ] [ 17 ] He discovered that the formation of a thermally grown silicon dioxide (SiO 2 ) layer greatly reduced the concentration of electronic states at the silicon surface , [ 17 ] and discovered the important quality of SiO 2 films to preserve the electrical characteristics of p–n junctions and prevent these electrical characteristics from deteriorating by the gaseous ambient environment. [ 18 ] He found that silicon oxide layers could be used to electrically stabilize silicon surfaces. [ 19 ] He developed the surface passivation process, a new method of semiconductor device fabrication that involves coating a silicon wafer with an insulating layer of silicon oxide so that electricity could reliably penetrate to the conducting silicon below. By growing a layer of silicon dioxide on top of a silicon wafer, Atalla was able to overcome the surface states that prevented electricity from reaching the semiconducting layer. His surface passivation method was a critical step that made possible the ubiquity of silicon integrated circuits , and later became critical to the semiconductor industry. [ 7 ] [ 14 ] For the surface passivation process, he developed the method of thermal oxidation , which was a breakthrough in silicon semiconductor technology. [ 20 ]
Atalla first published his findings in BTL memos during 1957, before presenting his work at an Electrochemical Society meeting in 1958, [ 21 ] [ 22 ] the Radio Engineers' Semiconductor Device Research Conference. [ 8 ] The semiconductor industry saw the potential significance of Atalla's surface oxidation method, with RCA calling it a "milestone in the surface field." [ 8 ] The same year, he made further refinements to the process with his colleagues Eileen Tannenbaum and Edwin Joseph Scheibner, before they published their results in May 1959. [ 23 ] [ 24 ] According to Fairchild Semiconductor engineer Chih-Tang Sah , the surface passivation process developed by Atalla and his team "blazed the trail" that led to the development of the silicon integrated circuit. [ 25 ] [ 23 ] Atalla's silicon transistor passivation technique by thermal oxide [ 26 ] was the basis for several important inventions in 1959: the MOSFET (MOS transistor) by Atalla and Dawon Kahng at Bell Labs, the planar process by Jean Hoerni at Fairchild Semiconductor . [ 22 ] [ 25 ] [ 27 ]
Building on his earlier pioneering research [ 28 ] on the surface passivation and thermal oxidation processes, [ 20 ] Atalla developed the metal–oxide–semiconductor (MOS) process. [ 7 ] Atalla then proposed that a field effect transistor –a concept first envisioned in the 1920s and confirmed experimentally in the 1940s, but not achieved as a practical device—be built of metal-oxide-silicon. Atalla assigned the task of assisting him to Dawon Kahng , a Korean scientist who had recently joined his group. [ 7 ] That led to the invention of the MOSFET (metal–oxide–semiconductor field-effect transistor) by Atalla and Kahng, [ 29 ] [ 30 ] in November 1959. [ 8 ] Atalla and Kahng first demonstrated the MOSFET in early 1960. [ 31 ] [ 32 ] With its high scalability , [ 33 ] and much lower power consumption and higher density than bipolar junction transistors , [ 34 ] the MOSFET made it possible to build high-density integrated circuit (IC) chips. [ 35 ]
In 1960, Atalla and Kahng fabricated the first MOSFET with a gate oxide thickness of 100 nm , along with a gate length of 20 μm . [ 36 ] In 1962, Atalla and Kahng fabricated a nanolayer -base metal–semiconductor junction (M–S junction) transistor. This device has a metallic layer with nanometric thickness sandwiched between two semiconducting layers, with the metal forming the base and the semiconductors forming the emitter and collector. With its low resistance and short transit times in the thin metallic nanolayer base, the device was capable of high operation frequency compared to bipolar transistors . Their pioneering work involved depositing metal layers (the base) on top of single crystal semiconductor substrates (the collector), with the emitter being a crystalline semiconductor piece with a top or a blunt corner pressed against the metallic layer (the point contact). They deposited gold (Au) thin films with a thickness of 10 nm on n-type germanium (n-Ge), while the point contact was n-type silicon (n-Si). [ 37 ] Atalla resigned from BTL in 1962. [ 30 ]
Extending their work on MOS technology, Atalla and Kahng next did pioneering work on hot carrier devices, which used what would later be called a Schottky barrier . [ 38 ] The Schottky diode , also known as the Schottky-barrier diode, was theorized for years, but was first practically realized as a result of the work of Atalla and Kahng during 1960–1961. [ 39 ] They published their results in 1962 and called their device the "hot electron" triode structure with semiconductor-metal emitter. [ 40 ] It was one of the first metal -base transistors. [ 41 ] The Schottky diode went on to assume a prominent role in mixer applications. [ 39 ]
In 1962, Atalla joined Hewlett-Packard , where he co-founded Hewlett-Packard and Associates (HP Associates), which provided Hewlett-Packard with fundamental solid-state capabilities. [ 5 ] He was the Director of Semiconductor Research at HP Associates, [ 30 ] and the first manager of HP's Semiconductor Lab. [ 42 ]
He continued research on Schottky diodes at HP Associates, working with Robert J. Archer. They developed high-vacuum metal film deposition technology [ 43 ] and fabricated stable evaporated/sputtered contacts , [ 44 ] [ 45 ] publishing their results in January 1963. [ 46 ] Their work was a breakthrough in metal–semiconductor junction [ 44 ] and Schottky barrier research, as it overcame most of the fabrication problems inherent in point-contact diodes and made it possible to build practical Schottky diodes. [ 43 ]
At the Semiconductor Lab during the 1960s, he launched a materials science investigation program that provided a base technology for gallium arsenide (GaAs), gallium arsenide phosphide (GaAsP) and indium arsenide (InAs) devices. These devices became the core technology used by HP's Microwave Division to develop sweepers and network analyzers that pushed frequencies to 20–40 GHz, giving HP more than 90% of the military communications market. [ 42 ]
Atalla helped create HP Labs in 1966. He directed its solid-state division. [ 5 ]
In 1969, he left HP and joined Fairchild Semiconductor . [ 38 ] He was the vice president and general manager of the Microwave & Optoelectronics division [ 47 ] from its inception in May 1969 until November 1971. [ 48 ] He continued his work on light-emitting diodes (LEDs), proposing in 1971 that they could be used for indicator lights and optical readers. [ 49 ] He left Fairchild in 1972. [ 38 ]
He left the semiconductor industry in 1972 and began a new career as an entrepreneur in data security [ 38 ] and cryptography . [ 50 ] That same year, [ 50 ] he founded Atalla Technovation, [ 51 ] later called Atalla Corporation , which addressed the security problems of banks and financial institutions . [ 52 ]
He invented the first hardware security module (HSM), [ 53 ] the so-called " Atalla Box ", a security system that still secures the majority of ATM transactions today. At the same time, Atalla contributed to the development of the personal identification number (PIN) system, which became the standard means of identification in the banking industry, among others.
The work of Atalla in the early 1970s led to the use of hardware security modules . His "Atalla Box" was a security system that encrypted PIN and ATM messages and protected offline devices with an un-guessable PIN-generating key. [ 54 ] He released it commercially in 1973 [ 54 ] as the Identikey, a card reader and customer identification system that provided a terminal with plastic card and PIN capabilities. The system was designed to let banks and thrift institutions switch from a passbook program to a plastic card environment. The Identikey system consisted of a card reader console, two customer PIN pads , an intelligent controller and a built-in electronic interface package. [ 55 ] The device had two keypads, one for the customer and one for the teller. It allowed the customer to type in a secret code, which the device transformed, using a microprocessor , into another code for the teller. [ 56 ] During a transaction , the card reader read the customer's account number , replacing manual entry and avoiding possible keystroke errors. It allowed users to replace traditional customer verification methods such as signature verification and test questions with a secure PIN system. [ 55 ]
A key innovation of the Atalla Box was the key block , which is required to securely interchange symmetric keys or PINs with other actors of the banking industry. This secure interchange is performed using the Atalla Key Block (AKB) format, which lies at the root of all cryptographic block formats used within the Payment Card Industry Data Security Standard (PCI DSS) and American National Standards Institute (ANSI) standards. [ 57 ]
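The idea behind such a key block can be sketched in a few lines of Python. This is a loose illustration of the general construction (a cleartext header stating the key's permitted usage, the working key encrypted under a key-encryption key, and a MAC binding the two together), not the actual AKB wire format; all field names, lengths and the toy stream cipher below are assumptions for illustration only.

# Sketch of the key-block idea underlying formats such as the AKB: a
# cleartext header states the key's permitted usage, the working key is
# encrypted under a key-encryption key (KEK), and a MAC over header plus
# encrypted key binds the usage attributes to the key. Illustrative only;
# this is not the real AKB format, and the XOR keystream is a stdlib
# stand-in for a proper block cipher.
import hmac
import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy cipher: SHA-256 in counter mode XORed with the data.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def wrap_key(kek: bytes, working_key: bytes, usage: str) -> bytes:
    header = f"KEYBLOCK|usage={usage}|".encode()   # cleartext usage attributes
    enc_key = _keystream_xor(kek, working_key)     # key encrypted under the KEK
    mac = hmac.new(kek, header + enc_key, hashlib.sha256).digest()[:8]
    return header + enc_key + mac                  # header || encrypted key || MAC

def unwrap_key(kek: bytes, block: bytes, key_len: int) -> bytes:
    header, enc_key, mac = block[:-8 - key_len], block[-8 - key_len:-8], block[-8:]
    expected = hmac.new(kek, header + enc_key, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(mac, expected):
        raise ValueError("key block tampered with")  # usage field cannot be altered
    return _keystream_xor(kek, enc_key)

kek = os.urandom(32)
pin_key = os.urandom(16)
block = wrap_key(kek, pin_key, "PIN-encryption")
assert unwrap_key(kek, block, 16) == pin_key

The essential property, shared by the AKB and its successors, is that the MAC covers the header as well as the encrypted key, so an attacker cannot re-label, say, a PIN-encryption key for a different use without the change being detected.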
Fearful that Atalla would dominate the market, banks and credit card companies began working on an international standard. [ 54 ] The Atalla Box's PIN verification process was similar to that of the later IBM 3624 . [ 58 ] Atalla was an early competitor to IBM in the banking market, and was cited as an influence by IBM employees who worked on the Data Encryption Standard (DES). [ 51 ] In recognition of his work on the PIN system of information security management , Atalla has been referred to as the "Father of the PIN" [ 5 ] [ 59 ] [ 60 ] and as a father of information security technology. [ 61 ]
The Atalla Box protected over 90% of all ATM networks in operation as of 1998, [ 62 ] and secured 85% of all ATM transactions worldwide as of 2006. [ 63 ] Atalla products still secure the majority of the world's ATM transactions, as of 2014. [ 53 ]
In 1972, Atalla filed U.S. patent 3,938,091 for a remote PIN verification system, which used encryption to secure the telephone link while personal ID information was entered; the information was transmitted as encrypted data over telecommunications networks to a remote location for verification. This was a precursor to telephone banking , Internet security and e-commerce . [ 51 ]
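The concept of the patent, entering a PIN locally and verifying it remotely without ever transmitting it in the clear, can be sketched as follows. The keyed construction, the key distribution and all names here are assumptions for illustration, not the method described in the patent.

# Rough sketch of remote PIN verification: the terminal sends a keyed
# one-way transformation of the PIN, which the remote host recomputes and
# compares. Construction and names are illustrative, not the patented method.
import hmac
import hashlib

TERMINAL_KEY = b"shared secret established out of band"  # assumed pre-shared key

def pin_cryptogram(account_number: str, pin: str) -> str:
    # The clear PIN never leaves the terminal; only this value is sent.
    msg = f"{account_number}:{pin}".encode()
    return hmac.new(TERMINAL_KEY, msg, hashlib.sha256).hexdigest()

# Host side: stores only the keyed transform, never the clear PIN.
PIN_DATABASE = {"4000123412341234": pin_cryptogram("4000123412341234", "1234")}

def verify(account_number: str, received: str) -> bool:
    expected = PIN_DATABASE.get(account_number)
    return expected is not None and hmac.compare_digest(expected, received)

assert verify("4000123412341234", pin_cryptogram("4000123412341234", "1234"))
assert not verify("4000123412341234", pin_cryptogram("4000123412341234", "9999"))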
At the National Association of Mutual Savings Banks (NAMSB) conference in January 1976, Atalla announced an upgrade to the Identikey system, called the Interchange Identikey. It added the capabilities of processing online transactions and dealing with network security . Designed to take bank transactions online , the Identikey system was extended to shared-facility operations. It was consistent and compatible with various switching networks , and was capable of resetting itself electronically to any one of 64,000 irreversible nonlinear algorithms as directed by card data information. The Interchange Identikey device was released in March 1976. It was one of the first products designed to deal with online transactions, along with Bunker Ramo Corporation products unveiled at the same NAMSB conference. [ 56 ] In 1979, Atalla introduced the first network security processor (NSP). [ 64 ]
In 1987, Atalla Corporation merged with Tandem Computers . Atalla went into retirement in 1990.
As of 2013, 250 million card transactions are protected by Atalla products every day. [ 50 ]
Before long, several executives of large banks persuaded him to develop security systems for the Internet. They were concerned that no useful framework for electronic commerce would be possible without innovation in the computer and network security industry. [ 5 ] Following a request from former Wells Fargo Bank president William Zuendt in 1993, Atalla began developing a new Internet security technology that allowed companies to scramble and transmit secure computer files, e-mail , and digital video and audio over the internet. [ 59 ]
As a result of these activities, he founded TriStrata Security in 1996. [ 65 ] In contrast to most conventional computer security systems at the time, which built walls around a company's entire computer network to protect the information within from thieves or corporate spies, TriStrata took a different approach. Its security system wrapped a secure, encrypted envelope around individual pieces of information (such as a word processing file, a customer database , or e-mail) that could only be opened and deciphered with an electronic permit, allowing companies to control which users had access to the information and to the necessary permits. [ 59 ] It was considered a new approach to enterprise security at the time. [ 5 ]
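The per-item envelope can be understood as what is now called envelope encryption: each piece of information is sealed under its own data key, and a "permit" is that data key wrapped for one authorized user. The sketch below illustrates the pattern with the third-party Python cryptography package; it is a generic modern rendering of the idea, and all names are assumptions, not TriStrata's actual design.

# Envelope-encryption sketch of the "secure envelope + permit" idea: each
# document is sealed under its own data key, and a "permit" is that data
# key wrapped for one authorized user. Generic illustration only.
from cryptography.fernet import Fernet

def seal_document(plaintext: bytes):
    data_key = Fernet.generate_key()           # one fresh key per document
    envelope = Fernet(data_key).encrypt(plaintext)
    return envelope, data_key

def issue_permit(data_key: bytes, user_key: bytes) -> bytes:
    return Fernet(user_key).encrypt(data_key)  # data key wrapped for one user

def open_document(envelope: bytes, permit: bytes, user_key: bytes) -> bytes:
    data_key = Fernet(user_key).decrypt(permit)
    return Fernet(data_key).decrypt(envelope)

alice_key = Fernet.generate_key()
envelope, data_key = seal_document(b"quarterly customer database")
permit = issue_permit(data_key, alice_key)     # access granted to Alice only
assert open_document(envelope, permit, alice_key) == b"quarterly customer database"

A user without a permit for a given document cannot open it even if they hold permits for other documents, which matches the per-item access control described above.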
Atalla was the chairman of A4 System, as of 2003. [ 5 ]
He lived in Atherton , California . Atalla died on December 30, 2009, in Atherton. [ 66 ]
Atalla was awarded the Stuart Ballantine Medal (now the Benjamin Franklin Medal in physics) at the 1975 Franklin Institute Awards , for his important contributions to silicon semiconductor technology and his invention of the MOSFET. [ 67 ] [ 68 ] In 2003, Atalla received a Distinguished Alumnus doctorate from Purdue University . [ 5 ]
In 2009, he was inducted into the National Inventors Hall of Fame for his important contributions to semiconductor technology as well as data security. [ 7 ] He was referred to as one of the "Sultans of Silicon" along with several other semiconductor pioneers. [ 32 ]
In 2014, the 1959 invention of the MOSFET was included on the list of IEEE milestones in electronics. [ 69 ] In 2015, Atalla was inducted into the IT History Society 's IT Honor Roll for his important contributions to information technology . [ 70 ] | https://en.wikipedia.org/wiki/Mohamed_M._Atalla |
Mohamed Osman Baloola ( Arabic : محمد عثمان بلولة ; born April 14, 1981) is a Sudanese scientist and inventor who was named among The World's 500 Most Influential Arabs in 2012 and 2013 [ 1 ] [ 2 ] [ 3 ] for his work on diabetes . Baloola has been a teaching assistant of biomedical engineering at the Ajman University of Science and Technology since 2010. He won a science and innovation award at the Arabian Business Awards 2011, held at the Armani Hotel in the Burj Khalifa in Dubai. [ 4 ] He won Dh40,000 (US$11,000) in a Sharjah television competition for his invention of a mobile-phone-based remote monitoring and control system for diabetes patients. [ 5 ]
Baloola was born on April 14, 1981, in Abu Dhabi , United Arab Emirates.
Baloola received a Bachelor of Science in biomedical engineering from Ajman University of Science and Technology in September 2009, then joined the university as a teaching assistant in the Faculty of Engineering. He won many awards during his studies and after graduating. [ 6 ]
Baloola researches diabetes due to a family history of the disease. [ 12 ] His father, mother and brother are diabetics, and his concern for the growing number of diabetics worldwide prompted his invention. He developed a remote monitoring and control system for diabetes symptoms. [ 5 ] He set about creating an artificial pancreas [ clarification needed ] and a remote system to monitor the stability of glucose levels in diabetics. The device, which can be linked to a hospital database system as well as to family and friends, enables an immediate response if a medical situation arises. [ 13 ] | https://en.wikipedia.org/wiki/Mohamed_Osman_Baloola |