A mating system is a way in which a group is structured in relation to sexual behaviour. The precise meaning depends upon the context. With respect to animals , the term describes which males and females mate under which circumstances. Recognised systems include monogamy , polygamy (which includes polygyny , polyandry , and polygynandry ), and promiscuity , all of which lead to different mate choice outcomes, and thus these systems affect how sexual selection works in the species that practice them. In plants, the term refers to the degree and circumstances of outcrossing . In human sociobiology , the terms have been extended to encompass the formation of relationships such as marriage .
The primary mating systems in plants are outcrossing (cross-fertilisation), autogamy (self-fertilisation) and apomixis (asexual reproduction without fertilization, but only when arising by modification of sexual function). Mixed mating systems , in which plants use two or even all three mating systems, are not uncommon. [ 1 ]
A number of models have been used to describe the parameters of plant mating systems. The basic model is the mixed mating model , which is based on the assumption that every fertilisation is either self-fertilisation or completely random cross-fertilisation. More complex models relax this assumption; for example, the effective selfing model recognises that mating may be more common between pairs of closely related plants than between pairs of distantly related plants. [ 1 ]
The following are some of the mating systems generally recognized in animals:
These mating relationships may or may not be associated with social relationships, in which the sexual partners stay together to become parenting partners. As the alternative term "pair bonding" implies, this is usual in monogamy. In many polyandrous systems, the males and the female stay together to rear the young. In polygynous systems where the number of females paired with each male is low, the male will often stay with one female to help rear the young, while the other females rear their young on their own. In polygynandry, each of the males may assist one female; if all adults help rear all the young, the system is more usually called " communal breeding ". In highly polygynous systems, and in promiscuous systems, paternal care of young is rare, or there may be no parental care at all.
These descriptions are idealized, and the social partnerships are often easier to observe than the mating relationships. In particular:
Sexual conflict occurs between individuals of different sexes that have separate or conflicting requirements for optimal mating success. This conflict may lead to competitive adaptations and co-adaptations of one or both of the sexes to maintain mating processes that are beneficial to that sex. [ 8 ] [ 9 ] Intralocus sexual conflict and interlocus sexual conflict describe the genetic influence behind sexual conflict, and are presently recognized as the most basic forms of sexual conflict. [ 9 ]
Compared to other vertebrates , where a species usually has a single mating system, humans display great variety. Humans also differ by having formal marriages , which in some cultures involve negotiation and arrangement between elder relatives. Regarding sexual dimorphism (see the section about animals above), humans are in the intermediate group with moderate sex differences in body size but with relatively small testes, [ 10 ] indicating relatively low sperm competition in socially monogamous and polygynous human societies. One estimate is that 83% of human societies are polygynous, 0.05% are polyandrous, and the rest are monogamous. Even the last group may at least in part be genetically polygynous. [ 11 ]
From an evolutionary standpoint, females are more prone to practice monogamy because their reproductive success is based on the resources they are able to acquire through reproduction rather than the quantity of offspring they produce. However, males are more likely to practice polygamy because their reproductive success is based on the amount of offspring they produce, rather than any kind of benefit from parental investment. [ 12 ]
Polygyny is associated with an increased sharing of subsistence provided by women. This is consistent with the theory that if women raise the children alone, men can concentrate on the mating effort. Polygyny is also associated with greater environmental variability in the form of variability of rainfall . This may increase the differences in the resources available to men. Polygyny is further associated with a higher pathogen load in an area, which may make having good genes in a male increasingly important. A high pathogen load also decreases the relative importance of sororal polygyny, which may be because it becomes increasingly important to have genetic variability in the offspring (see Major histocompatibility complex and sexual selection ). [ 11 ]
Virtually all the terms used to describe animal mating systems were adopted from social anthropology , where they had been devised to describe systems of marriage . This shows that human sexual behavior is unusually flexible since, in most animal species, one mating system dominates. While there are close analogies between animal mating systems and human marriage institutions, these analogies should not be pressed too far, because in human societies, marriages typically have to be recognized by the entire social group in some way, and there is no equivalent process in animal societies. The temptation to draw conclusions about what is "natural" for human sexual behavior from observations of animal mating systems should be resisted: a sociobiologist observing, in any other species, the kinds of behavior shown by humans would conclude that all known mating systems were natural for that species, depending on the circumstances or on individual differences. [ 12 ]
As culture increasingly affects human mating choices, ascertaining what is the 'natural' mating system of the human animal from a zoological perspective becomes increasingly difficult. Some clues can be taken from human anatomy, which is essentially unchanged from the prehistoric past:
Some have suggested that these anatomical factors signify some degree of sperm competition , although others have provided anatomical evidence to suggest that sperm competition risk in humans is low. [ 10 ] [ 13 ]
Monogamy has evolved multiple times in animals, with homologous brain structures predicting the mating and parental strategies used by them. These homologous structures were brought about by similar mechanisms. Even though there have been many different evolutionary pathways to monogamy, all the studied organisms express their genes very similarly in the fore and midbrain, implying a universal mechanism for the evolution of monogamy in vertebrates. [ 17 ] While genetics is not the exclusive cause of mating systems within animals, it is influential in many animals, particularly rodents , which have been the most heavily researched. Certain rodents' mating systems—monogamous, polygynous, or socially monogamous with frequent promiscuity—correlate with proposed evolutionary phylogenies : rodents that are more closely related genetically are more likely to use a similar mating system, suggesting an evolutionary basis. These differences in mating strategy can be traced back to a few significant alleles affecting behaviors that strongly shape the mating system, such as the level of parental care, how animals choose their partner(s), and sexual competitiveness. [ 18 ] While these genes may not perfectly predict the mating system an animal uses, genetics is one factor that may lead a species or population to reproduce using one mating system over another, or even to use multiple systems at different locations or points in time.
Mating systems can also have large impacts on the genetics of a population, strongly affecting natural selection and speciation. In plover populations, polygamous species tend to speciate more slowly than monogamous species do. This is likely because polygamous animals tend to move larger distances to find mates, contributing to a high level of gene flow , which can genetically homogenize many nearby subpopulations. Monogamous animals, on the other hand, tend to stay closer to their starting location and do not disperse as much. [ 19 ] Because monogamous animals do not migrate as far, monogamous populations that are geographically close together tend to become reproductively isolated from each other more easily, and thus each subpopulation is more likely to diversify or speciate from nearby populations than in polygamous species. In polygamous species, by contrast, the searching sex (the male partner in polygynous species and the female partner in polyandrous species) often disperses further in search of more or better mates. This increased movement among populations leads to increased gene flow, effectively making geographically distinct populations genetically similar through interbreeding. [ 20 ] This has been observed in some rodents, where a generally promiscuous lineage rapidly differentiated into monogamous and polygamous taxa after monogamous behaviors arose in some of its populations, illustrating the swift evolutionary effects that different mating systems can have. Specifically, monogamous populations speciated up to 4.8 times faster and had lower extinction rates than non-monogamous populations. [ 18 ] Monogamy can also promote speciation because individuals are more selective about partners and competition, causing nearby populations of the same species to interbreed less and eventually to speciate. [ 20 ]
Another potential effect of polyandry in particular is increasing the quality of offspring and reducing the probability of reproductive failure. [ 21 ] There are many possible reasons for this; one possibility is that there is greater genetic variation within families, because offspring in a family will often have different fathers. [ 22 ] This reduces the potential harm done by inbreeding, as siblings will be less closely related and more genetically diverse. Additionally, because of the increased genetic diversity among generations, reproductive fitness is also more variable, so positive traits can be selected for more quickly, as the difference in fitness between members of the same generation is greater. When many males are actively mating, polyandry can also decrease the risk of extinction, as it can increase the effective population size . Increased effective population sizes are more stable and less prone to accumulating deleterious mutations due to genetic drift. [ 22 ]
Mating in bacteria involves transfer of DNA from one cell to another and incorporation of the transferred DNA into the recipient bacteria's genome by homologous recombination . Transfer of DNA between bacterial cells can occur in three main ways. First, a bacterium can take up exogenous DNA released into the intervening medium from another bacterium by a process called transformation . DNA can also be transferred from one bacterium to another by the process of transduction , which is mediated by an infecting virus (bacteriophage). The third method of DNA transfer is conjugation , in which a plasmid mediates transfer through direct cell contact between cells.
Transformation, unlike transduction or conjugation, depends on numerous bacterial gene products that specifically interact to perform this complex process, [ 23 ] and thus transformation is clearly a bacterial adaptation for DNA transfer. In order for a bacterium to bind, take up and recombine donor DNA into its own chromosome, it must first enter a special physiological state termed natural competence . In Bacillus subtilis about 40 genes are required for the development of competence and DNA uptake. [ 24 ] The length of DNA transferred during B. subtilis transformation can range from a third of the chromosome up to the whole chromosome. [ 25 ] [ 26 ] Transformation appears to be common among bacterial species, and at least 60 species are known to have the natural ability to become competent for transformation. [ 27 ] The development of competence in nature is usually associated with stressful environmental conditions, and seems to be an adaptation for facilitating repair of DNA damage in recipient cells. [ 28 ]
In several species of archaea , mating is mediated by formation of cellular aggregates. Halobacterium volcanii , an extreme halophilic archaeon, forms cytoplasmic bridges between cells that appear to be used for transfer of DNA from one cell to another in either direction. [ 29 ]
When the hyperthermophilic archaea Sulfolobus solfataricus [ 30 ] and Sulfolobus acidocaldarius [ 31 ] are exposed to the DNA damaging agents UV irradiation, bleomycin or mitomycin C , species-specific cellular aggregation is induced. Aggregation in S. solfataricus could not be induced by other physical stressors, such as pH or temperature shift, [ 30 ] suggesting that aggregation is induced specifically by DNA damage. Ajon et al. [ 31 ] showed that UV-induced cellular aggregation mediates chromosomal marker exchange with high frequency in S. acidocaldarius . Recombination rates exceeded those of uninduced cultures by up to three orders of magnitude. Frols et al. [ 30 ] and Ajon et al. [ 31 ] hypothesized that cellular aggregation enhances species-specific DNA transfer between Sulfolobus cells in order to provide increased repair of damaged DNA by means of homologous recombination . This response appears to be a primitive form of sexual interaction similar to the more well-studied bacterial transformation systems that are also associated with species specific DNA transfer between cells leading to homologous recombinational repair of DNA damage. [ citation needed ]
Protists are a large group of diverse eukaryotic microorganisms , mainly unicellular animals and plants, that do not form tissues . Eukaryotes emerged in evolution more than 1.5 billion years ago. [ 32 ] The earliest eukaryotes were likely protists. Mating and sexual reproduction are widespread among extant eukaryotes. Based on a phylogenetic analysis, Dacks and Roger [ 33 ] proposed that facultative sex was present in the common ancestor of all eukaryotes.
However, to many biologists it seemed unlikely until recently that mating and sex could be a primordial and fundamental characteristic of eukaryotes. A principal reason for this view was that mating and sex appeared to be lacking in certain pathogenic protists whose ancestors branched off early from the eukaryotic family tree. However, several of these protists are now known to be capable of, or to have recently had, the capability for meiosis and hence mating. To cite one example, the common intestinal parasite Giardia intestinalis was once considered to be a descendant of a protist lineage that predated the emergence of meiosis and sex. However, G. intestinalis was recently found to have a core set of genes that function in meiosis and that are widely present among sexual eukaryotes. [ 34 ] These results suggested that G. intestinalis is capable of meiosis and thus mating and sexual reproduction. Furthermore, direct evidence for meiotic recombination, indicative of mating and sexual reproduction, was also found in G. intestinalis . [ 35 ] Other protists for which evidence of mating and sexual reproduction has recently been described are parasitic protozoa of the genus Leishmania , [ 36 ] Trichomonas vaginalis , [ 37 ] and Acanthamoeba . [ 38 ]
Protists generally reproduce asexually under favorable environmental conditions, but tend to reproduce sexually under stressful conditions, such as starvation or heat shock. [ citation needed ]
Both animal viruses and bacterial viruses ( bacteriophage ) are able to undergo mating. When a cell is mixedly infected by two genetically marked viruses, recombinant virus progeny are often observed indicating that mating interaction had occurred at the DNA level. Another manifestation of mating between viral genomes is multiplicity reactivation (MR). MR is the process by which at least two virus genomes, each containing inactivating genome damage, interact with each other in an infected cell to form viable progeny viruses. The genes required for MR in bacteriophage T4 are largely the same as the genes required for allelic recombination. [ 39 ] Examples of MR in animal viruses are described in the articles Herpes simplex virus , Influenza A virus , Adenoviridae , Simian virus 40 , Vaccinia virus , and Reoviridae . | https://en.wikipedia.org/wiki/Mating_system |
Mating types are the microorganism equivalent to sexes in multicellular lifeforms and are thought to be ancestral to distinct sexes . They also occur in multicellular organisms such as fungi.
Mating types are the microorganism equivalent to sex in higher organisms [ 1 ] and occur in isogamous species. [ 2 ] Depending on the group, different mating types are often referred to by numbers, letters, or simply "+" and "−" instead of " male " and " female ", which refer to " sexes " or differences in size between gametes . [ 1 ] Syngamy can only take place between gametes carrying different mating types.
Mating types are extensively studied in fungi. Among fungi, mating type is determined by chromosomal regions called mating-type loci . Furthermore, it is not as simple as "two different mating types can mate"; rather, compatibility is a matter of combinatorics. As a simple example, most basidiomycetes have a "tetrapolar heterothallism " mating system: there are two loci, and mating between two individuals is possible if the alleles on both loci are different. For example, if there are 3 alleles per locus, then there are 9 mating types, each of which can mate with 4 other mating types. [ 3 ] By multiplicative combination, this system generates a vast number of mating types.
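To make the combinatorics concrete, here is a minimal Python sketch (an illustration only, not taken from the cited sources) that enumerates the mating types for the simple two-locus, three-allele example above and checks that each of the 9 types is compatible with exactly 4 others:

```python
from itertools import product

# Simplified tetrapolar heterothallism: two unlinked mating-type loci (A and B),
# each with 3 possible alleles. A mating type is a pair (A-allele, B-allele).
alleles_A = [1, 2, 3]
alleles_B = [1, 2, 3]
mating_types = list(product(alleles_A, alleles_B))   # 3 x 3 = 9 mating types

def compatible(t1, t2):
    """Mating succeeds only if the alleles differ at BOTH loci."""
    return t1[0] != t2[0] and t1[1] != t2[1]

print(len(mating_types))                              # 9
for t in mating_types:
    partners = [u for u in mating_types if compatible(t, u)]
    assert len(partners) == (3 - 1) * (3 - 1)         # each type has 4 partners
```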
As an illustration, the model organism Coprinus cinereus has two mating-type loci called A and B . Both loci have 3 groups of genes. At the A locus are 6 homeodomain proteins arranged in 3 groups of 2 (HD1 and HD2), which arose by gene duplication. At the B locus, each of the 3 groups contains one pheromone G-protein-coupled receptor and usually two genes for pheromones.
The A locus ensures heterothallism through a specific interaction between HD1 and HD2 proteins. Within each group, an HD1 protein can only form a functional heterodimer with an HD2 protein from a different group, not with the HD2 protein from its own group. Functional heterodimers are necessary to form a dikaryon -specific transcription factor , and their absence arrests the developmental process. They function redundantly, so it is only necessary for one of the three groups to be heterozygotic for the A locus to work. [ 4 ]
Similarly, the B locus ensures heterothallism through a specific interaction between pheromone receptors and pheromones. Each pheromone receptor is activated by pheromones from other groups, but not by the pheromone encoded by the same group. This means that a pheromone receptor can only trigger a signaling cascade when it binds to a pheromone from a different group, not when it binds to the pheromone from its own group. They also function redundantly. [ 4 ]
In both cases, the mechanism is based on a "self-incompatibility" principle, where the proteins or pheromones from the same group are incompatible with each other, but compatible with those from different groups. [ 5 ] [ 6 ]
Similarly, Schizophyllum commune has 2 gene groups (Aα, Aβ) for homeodomain proteins on the A locus, and 2 gene groups (Bα, Bβ) for pheromones and receptors on the B locus. Aα has 9 alleles, Aβ has 32, Bα has 9, and Bβ has 9. The two gene groups at the A locus function independently but redundantly, so only one group out of the two needs to be heterozygotic for it to work. The same holds for the two gene groups at the B locus. Thus, mating between two individuals succeeds if
{\displaystyle [(A\alpha _{1}\neq A\alpha _{2})\ \mathrm {OR} \ (A\beta _{1}\neq A\beta _{2})]\ \mathrm {AND} \ [(B\alpha _{1}\neq B\alpha _{2})\ \mathrm {OR} \ (B\beta _{1}\neq B\beta _{2})]}
Thus there are 9 × 32 × 9 × 9 = 23,328 mating types, each of which can mate with (9 × 32 − 1) × (9 × 9 − 1) = 22,960 other mating types. [ 7 ]
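The quoted figures for Schizophyllum commune follow from the same compatibility rule. The short Python sketch below is purely illustrative and simply restates that arithmetic: a partner is compatible whenever it differs somewhere within the A locus and somewhere within the B locus.

```python
# Reported allele counts for the S. commune mating-type gene groups.
A_alpha, A_beta = 9, 32
B_alpha, B_beta = 9, 9

# One allele choice per gene group gives the total number of mating types.
total_types = A_alpha * A_beta * B_alpha * B_beta                      # 23328

# Compatible partners: anything except an identical (Aα, Aβ) combination
# or an identical (Bα, Bβ) combination.
compatible_partners = (A_alpha * A_beta - 1) * (B_alpha * B_beta - 1)  # 22960

print(total_types, compatible_partners)
```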
Reproduction by mating types is especially prevalent in fungi . Filamentous ascomycetes usually have two mating types referred to as "MAT1-1" and "MAT1-2", following the yeast mating-type locus (MAT). [ 8 ] Under standard nomenclature, MAT1-1 (which may informally be called MAT1) encodes for a regulatory protein with an alpha box motif, while MAT1-2 (informally called MAT2) encodes for a protein with a high-mobility group (HMG) DNA-binding motif, as in the yeast mating type MATα1. [ 9 ] The corresponding mating types in yeast, a non-filamentous ascomycete, are referred to as MATa and MATα. [ 10 ]
Mating type genes in ascomycetes are called idiomorphs rather than alleles because it is uncertain whether they arose by common descent. The proteins they encode are transcription factors which regulate both the early and late stages of the sexual cycle. Heterothallic ascomycetes produce gametes which present a single Mat idiomorph, and syngamy will only be possible between gametes carrying complementary mating types. On the other hand, homothallic ascomycetes produce gametes that can fuse with every other gamete in the population (including their own mitotic descendants), most often because each haploid contains the two alternate forms of the Mat locus in its genome. [ 11 ]
Basidiomycetes can have thousands of different mating types. [ 12 ]
In the ascomycete Neurospora crassa matings are restricted to interaction of strains of opposite mating type. This promotes some degree of outcrossing. Outcrossing, through complementation , could provide the benefit of masking recessive deleterious mutations in genes which function in the dikaryon and/or diploid stage of the life cycle. [ 13 ]
Mating types likely predate anisogamy , [ 14 ] and sexes evolved directly from mating types or independently in some lineages. [ 15 ]
Studies on green algae have provided evidence for the evolutionary link between sexes and mating types. [ 16 ] In 2006, Japanese researchers found a gene in males of Pleodorina starrii that is an orthologue of a mating-type gene in Chlamydomonas reinhardtii . [ 17 ] In Volvocales , the plus mating type is ancestral to the female sex. [ 18 ]
In ciliates , multiple mating types evolved from binary mating types in several lineages. [ 19 ] : 75 As of 2019, genomic conflict has been considered the leading explanation for the evolution of two mating types. [ 20 ]
Secondary mating types evolved alongside simultaneous hermaphrodites in several lineages. [ 19 ] : 71 [ clarification needed ] | https://en.wikipedia.org/wiki/Mating_type |
In mathematics , a Diophantine equation is an equation of the form P ( x 1 , ..., x j , y 1 , ..., y k ) = 0 (usually abbreviated P ( x , y ) = 0) where P ( x , y ) is a polynomial with integer coefficients , where x 1 , ..., x j indicate parameters and y 1 , ..., y k indicate unknowns.
A Diophantine set is a subset S of ℕ^j , the set of all j -tuples of natural numbers, such that for some Diophantine equation P ( x , y ) = 0, a tuple ( x 1 , ..., x j ) belongs to S if and only if there exist natural numbers y 1 , ..., y k with P ( x 1 , ..., x j , y 1 , ..., y k ) = 0.
That is, a parameter value is in the Diophantine set S if and only if the associated Diophantine equation is satisfiable under that parameter value. The use of natural numbers both in S and the existential quantification merely reflects the usual applications in computability theory and model theory . It does not matter whether natural numbers refer to the set of nonnegative integers or positive integers since the two definitions for Diophantine sets are equivalent. We can also equally well speak of Diophantine sets of integers and freely replace quantification over natural numbers with quantification over the integers. [ 1 ] Also it is sufficient to assume P is a polynomial over Q {\displaystyle \mathbb {Q} } and multiply P by the appropriate denominators to yield integer coefficients. However, whether quantification over rationals can also be substituted for quantification over the integers is a notoriously hard open problem. [ 2 ]
The MRDP theorem (so named for the initials of the four principal contributors to its solution) states that a set of integers is Diophantine if and only if it is computably enumerable . [ 3 ] A set of integers S is computably enumerable if and only if there is an algorithm that, when given an integer, halts if that integer is a member of S and runs forever otherwise. This means that the concept of general Diophantine set, apparently belonging to number theory , can be taken rather in logical or computability-theoretic terms. This is far from obvious, however, and represented the culmination of some decades of work.
Matiyasevich's completion of the MRDP theorem settled Hilbert's tenth problem . Hilbert's tenth problem [ 4 ] was to find a general algorithm that can decide whether a given Diophantine equation has a solution among the integers. While Hilbert's tenth problem is not a formal mathematical statement as such, the nearly universal acceptance of the (philosophical) identification of a decision algorithm with a total computable predicate allows us to use the MRDP theorem to conclude that the tenth problem is unsolvable.
In the following examples, the natural numbers refer to the set of positive integers.
The equation
x = ( y 1 + 1)( y 2 + 1)
is an example of a Diophantine equation with a parameter x and unknowns y 1 and y 2 . The equation has a solution in y 1 and y 2 precisely when x can be expressed as a product of two integers greater than 1, in other words x is a composite number . Namely, this equation provides a Diophantine definition of the set
consisting of the composite numbers.
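As a purely illustrative check (not part of the original text), the brute-force Python sketch below compares the Diophantine condition x = (y1 + 1)(y2 + 1), with the unknowns ranging over positive integers as stated above, against ordinary compositeness for small values of x:

```python
def is_composite(x):
    """x is composite if x > 1 and x has a divisor other than 1 and itself."""
    return x > 1 and any(x % d == 0 for d in range(2, x))

def diophantine_composite(x):
    """Does x = (y1 + 1)(y2 + 1) have a solution with y1, y2 positive integers?"""
    return any(x == (y1 + 1) * (y2 + 1)
               for y1 in range(1, x)
               for y2 in range(1, x))

# The two characterisations agree on every value tested.
assert all(is_composite(x) == diophantine_composite(x) for x in range(1, 200))
```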
Other examples of Diophantine definitions are as follows:
Matiyasevich's theorem, also called the Matiyasevich – Robinson – Davis – Putnam or MRDP theorem, says:
Every computably enumerable set of integers is Diophantine. A set S of integers is computably enumerable if there is an algorithm such that: for each integer input n , if n is a member of S , then the algorithm eventually halts; otherwise it runs forever. That is equivalent to saying there is an algorithm that runs forever and lists the members of S . A set S is Diophantine precisely if there is some polynomial with integer coefficients f ( n , x 1 , ..., x k )
such that an integer n is in S if and only if there exist some integers x 1 , ..., x k such that f ( n , x 1 , ..., x k ) = 0.
Conversely, every Diophantine set is computably enumerable :
consider a Diophantine equation f ( n , x 1 , ..., x k ) = 0.
Now we make an algorithm that simply tries all possible values for n , x 1 , ..., x k (in, say, some simple order consistent with the increasing order of the sum of their absolute values),
and prints n every time f ( n , x 1 , ..., x k ) = 0.
This algorithm will obviously run forever and will list exactly the n for which f ( n , x 1 , ..., x k ) = 0 has a solution
in x 1 , ..., x k .
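A minimal Python sketch of this enumeration is shown below; it is illustrative only, and the polynomial f is a hypothetical placeholder (here defining the set of perfect squares). The search walks through all integer tuples in order of increasing sum of absolute values and prints n whenever f vanishes, so every member of the corresponding Diophantine set is eventually listed.

```python
from itertools import count, product

def f(n, x1, x2):
    # Placeholder polynomial with integer coefficients:
    # f = 0 exactly when n = x1**2, so the set listed is the perfect squares.
    return n - x1 * x1

def list_diophantine_set():
    """Print each n for which f(n, x1, x2) = 0 has a solution (runs forever)."""
    for total in count(0):
        # Visit every tuple whose coordinates sum (in absolute value) to `total`,
        # so each tuple is examined exactly once.
        for n, x1, x2 in product(range(-total, total + 1), repeat=3):
            if abs(n) + abs(x1) + abs(x2) == total and f(n, x1, x2) == 0:
                print(n)

# list_diophantine_set()  # prints members of the set (0, 1, 4, 9, ...),
#                         # with repetitions, and never halts
```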
Yuri Matiyasevich utilized a method involving Fibonacci numbers , which grow exponentially , in order to show that solutions to Diophantine equations may grow exponentially. Earlier work by Julia Robinson , Martin Davis and Hilary Putnam – hence, MRDP – had shown that this suffices to show that every computably enumerable set is Diophantine.
Hilbert's tenth problem asks for a general algorithm deciding the solvability of Diophantine equations. The conjunction of Matiyasevich's result with the fact that there exist computably enumerable sets that are not decidable implies that a solution to Hilbert's tenth problem is impossible.
Later work has shown that the question of solvability of a Diophantine equation is undecidable even if the equation only has 9 natural number variables (Matiyasevich, 1977) or 11 integer variables ( Zhi Wei Sun , 1992).
Matiyasevich's theorem has since been used to prove that many problems from calculus and differential equations are unsolvable.
One can also derive the following stronger form of Gödel's first incompleteness theorem from Matiyasevich's result:
Corresponding to any given consistent axiomatization of number theory, one can explicitly construct a Diophantine equation that has no solutions, but such that this fact cannot be proved within that axiomatization. According to the incompleteness theorems , a sufficiently powerful consistent axiomatic theory is incomplete, meaning the truth of some of its propositions cannot be established within its formalism. The statement above says that this incompleteness must include the solvability of a Diophantine equation, assuming that the theory in question is a number theory. | https://en.wikipedia.org/wiki/Matiyasevich's_theorem |
In algebra , Matlis duality is a duality between Artinian and Noetherian modules over a complete Noetherian local ring . In the special case when the local ring has a field [ clarification needed ] mapping to the residue field it is closely related to earlier work by Francis Sowerby Macaulay on polynomial rings and is sometimes called Macaulay duality , and the general case was introduced by Matlis ( 1958 ).
Suppose that R is a Noetherian complete local ring with residue field k , and choose E to be an injective hull of k (sometimes called a Matlis module ). The dual D R ( M ) of a module M is defined to be Hom R ( M , E ). Then Matlis duality states that the duality functor D R gives an anti-equivalence between the categories of Artinian and Noetherian R -modules. In particular the duality functor gives an anti-equivalence from the category of finite-length modules to itself.
Suppose that the Noetherian complete local ring R has a subfield k that maps onto a subfield of finite index of its residue field R / m . Then the Matlis dual of any R -module is just its dual as a topological vector space over k , if the module is given its m -adic topology. In particular the dual of R as a topological vector space over k is a Matlis module. This case is closely related to work of Macaulay on graded polynomial rings and is sometimes called Macaulay duality.
If R is a discrete valuation ring with quotient field K then the Matlis module is K / R . In the special case when R is the ring of p -adic numbers , the Matlis dual of a finitely-generated module is the Pontryagin dual of it considered as a locally compact abelian group .
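As a worked illustration of the preceding statement (standard facts about the p-adic case, not taken from the article itself): with R = ℤ_p the Matlis module is E = ℚ_p/ℤ_p, finite-length modules are sent to finite-length modules, and the duality exchanges the Noetherian module ℤ_p with the Artinian module ℚ_p/ℤ_p:

```latex
\begin{align*}
  D_R(\mathbb{Z}/p^n\mathbb{Z})
    &= \operatorname{Hom}_{\mathbb{Z}_p}\!\left(\mathbb{Z}/p^n\mathbb{Z},\; \mathbb{Q}_p/\mathbb{Z}_p\right)
     \cong \mathbb{Z}/p^n\mathbb{Z},\\
  D_R(\mathbb{Z}_p)
    &= \operatorname{Hom}_{\mathbb{Z}_p}\!\left(\mathbb{Z}_p,\; \mathbb{Q}_p/\mathbb{Z}_p\right)
     \cong \mathbb{Q}_p/\mathbb{Z}_p,\\
  D_R(\mathbb{Q}_p/\mathbb{Z}_p)
    &= \operatorname{Hom}_{\mathbb{Z}_p}\!\left(\mathbb{Q}_p/\mathbb{Z}_p,\; \mathbb{Q}_p/\mathbb{Z}_p\right)
     \cong \mathbb{Z}_p.
\end{align*}
```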
If R is a Cohen–Macaulay local ring of dimension d with dualizing module Ω, then the Matlis module is given by the local cohomology group H^d_R (Ω). In particular if R is an Artinian local ring then the Matlis module is the same as the dualizing module.
Matlis duality can be conceptually explained using the language of adjoint functors and derived categories : [ 1 ] the functor between the derived categories of R - and k -modules induced by regarding a k -module as an R -module admits a right adjoint, the derived internal Hom RHom_R ( k , −) : D ( R ) → D ( k ).
This right adjoint sends the injective hull E ( k ) mentioned above to k , which is a dualizing object in D ( k ). This abstract fact then gives rise to the above-mentioned equivalence. | https://en.wikipedia.org/wiki/Matlis_duality |
Matriphagy is the consumption of the mother by her offspring . [ 1 ] [ 2 ] The behavior generally takes place within the first few weeks of life and has been documented in some species of insects , nematode worms , pseudoscorpions , and other arachnids as well as in caecilian amphibians . [ 3 ] [ 4 ] [ 5 ]
The specifics of how matriphagy occurs varies among different species. However, the process is best-described in the Desert spider, Stegodyphus lineatus , where the mother harbors nutritional resources for her young through food consumption. The mother can regurgitate small portions of food for her growing offspring , but between 1–2 weeks after hatching the progeny capitalize on this food source by eating her alive. Typically, offspring only feed on their biological mother as opposed to other females in the population. In other arachnid species, matriphagy occurs after the ingestion of nutritional eggs known as trophic eggs (e.g. Black lace-weaver Amaurobius ferox , Crab spider Australomisidia ergandros ). It involves different techniques for killing the mother, such as transfer of poison via biting and sucking to cause a quick death (e.g. Black lace-weaver) or continuous sucking of the hemolymph , resulting in a more gradual death (e.g. Crab spider ). The behavior is less well described but follows a similar pattern in species such as the Hump earwig , pseudoscorpions , and caecilians .
Spiders that engage in matriphagy produce offspring with higher weights, shorter and earlier moulting time, larger body mass at dispersal , and higher survival rates than clutches deprived of matriphagy. In some species, matriphagous offspring were also more successful at capturing large prey items and had a higher survival rate at dispersal. These benefits to offspring outweigh the cost of survival to the mothers and help ensure that her genetic traits are passed to the next generation, thus perpetuating the behavior. [ 6 ] [ 7 ] [ 8 ] [ 9 ]
Overall, matriphagy is an extreme form of parental care but is highly related to extended care in the Funnel-web spider, parental investment in caecilians, and gerontophagy in social spiders. The uniqueness of this phenomenon has led to several expanded analogies in human culture and contributed to the pervasive fear of spiders throughout society. [ 10 ] [ 11 ]
Matriphagy can be broken down into two components:
Matriphagy generally consists of offspring consuming their mother; however, different species exhibit different variations of this behavior.
In many Black lace-weavers, Amaurobius ferox , offspring do not immediately consume their mother. A day after offspring emerge from their eggs, their mother lays a set of trophic eggs , which contain nutrition for the offspring to consume. [ 12 ] Matriphagy commences days later when the mother begins communicating with her offspring through web vibrations, drumming, and jumping. [ 12 ] [ 13 ] [ 7 ] Through these behaviors, offspring are able to detect when and where they can consume their mother. They migrate towards her and a couple of the spiderlings jump onto her back to consume her. [ 12 ] In response, the mother jumps and drums more frequently to keep her offspring off of her, however, they relentlessly continue attempting to get onto her back. [ 12 ] When the mother feels ready, she presses her body onto her offspring and allows them to consume her via sucking on her insides. [ 12 ] As they consume her, they also release poison into her body, causing a quick death. [ 12 ] The mother's body is kept for a few weeks as a nutritional reserve. [ 12 ]
Interestingly, matriphagy in this species depends on the developmental stage of the offspring. [ 12 ] If offspring older than four days are given to an unrelated mother, they refuse to consume her. [ 12 ] However, if younger offspring are given to an unrelated mother, they readily consume her. [ 12 ] Additionally, if a mother loses her offspring, she is able to produce another clutch of offspring. [ 12 ]
Mothers of one particular Australian species of the crab spider , Australomisidia ergandros (formerly in genus Diaea ), are only able to lay one clutch, unlike the Black lace-weaver. [ 9 ] They invest a significant amount of time and energy into storing nutrients and food into large oocytes , known as trophic eggs , similar to the Black lace-weaver. [ 9 ] However, these trophic eggs are too large to physically leave her body. [ 9 ] Some of the nutrients from the trophic eggs are liquefied into haemolymph , which can be consumed through the mother's leg joints by her offspring. [ 9 ] She gradually shrinks until she becomes immobile and dies. [ 9 ] [ 12 ]
In this species, it has been shown that this behavior may contribute to reducing cannibalism by siblings. [ 9 ]
Right after hatching, the hatchlings of the desert spider Stegodyphus lineatus rely solely upon their mother to provide them with food and nutrients. Their mother does this by regurgitating her bodily fluids, which contain a mixture of nutrients for them to feed on. [ 13 ] [ 7 ]
This behavior begins during mating. Mating causes an increase in the mother's production of digestive enzymes to better digest her prey. Consequently, she is able to retain more nutrients for her offspring to consume later. The mother's midgut tissues start to slowly degrade during the incubation period of her eggs. After her offspring hatch, she regurgitates food for them to feed on with the help of her already-liquefied midgut tissues. Meanwhile, her midgut tissues continue to degrade into a liquid state to maximize the amount of nutrients from the mother's body that her offspring will be able to obtain. As degradation continues, nutritional vacuoles form within her abdomen to amass all of the nutrients. Consumption begins when her offspring puncture her abdomen to suck up the nutritional vacuoles. After approximately 2–3 hours, the mother's bodily fluids are completely consumed, and only her exoskeleton remains. [ 13 ]
This species is only able to have one clutch, which might explain why so much time and energy is spent on taking care of offspring. Furthermore, matriphagy can also occur between offspring and unrelated females that have recently laid eggs. [ 13 ]
Anechura harmandi is currently the only earwig species documented to exhibit matriphagy. Mothers of this species have been found to reproduce during colder temperatures, [ 14 ] [ 15 ] mainly to avoid predation and maximize their offspring's survival, since females are unable to produce a second clutch. [ 14 ] [ 15 ] Because of the cold temperatures, nutrients are scarce when the offspring hatch, which is why the offspring end up consuming their mother. [ 14 ] [ 15 ]
Matriphagy in this species of Pseudoscorpions is usually observed during times of food scarcity. [ 16 ] After their offspring hatch, mothers exit their nests and wait to be consumed. [ 16 ] Offspring follow their mothers out of the nest where they grab onto her legs and proceed to feed through her leg joints, similar to that of Australomisidia ergandros . [ 16 ]
Females of this species are able to produce more than one clutch of offspring if their first clutch was unsuccessful. [ 16 ]
Matriphagy in this species has been predicted to prevent cannibalism between siblings as well. [ 16 ]
The adaptive value of matriphagy is based on the benefits provided to the offspring and the costs borne by the mother. [ 2 ] Functionally analyzing matriphagy in this manner sheds light on why this unusual and extreme form of care has evolved and been selected for.
Unlike other milder forms of parental care, matriphagy necessarily costs the mother her life. Nevertheless, matriphagy may serve the mother's reproductive fitness, considering reproductive output, egg sac development, and number of young reared. The key question is whether the mother would produce more surviving offspring by evading matriphagy and reproducing again or by engaging in matriphagy and producing only one clutch.
Matriphagy is one of the most extreme forms of parental care observed in the animal kingdom. However, in some species such as the Funnel-web spider Coelotes terrestris , matriphagy is only observed under certain conditions and extended maternal protection is the main method by which offspring receive care. In other organisms such as the African social velvet spider, Stegodyphus mimosarum and Caecilian amphibians, parental behavior closely related in form and function to matriphagy is used.
The ‘maternal social’ spider, Coelotes terrestris (Funnel-web spider) uses extended maternal care as a reproductive model for its offspring. Upon laying the egg sac , a C. terrestris mother stands guard and incubates the sac for 3 to 4 weeks. She stays with her young from the time of their emergence until dispersal approximately 5 to 6 weeks later. During the offspring's development, mothers will provide the spiderlings with prey based on their levels of gregariousness. [ 17 ]
Protecting the egg sacs from predation and parasites yields a high benefit to cost ratio for the mothers. Fitness of the mother is highly correlated to offspring developmental state—a mother in better condition yields larger young that are better at surviving predation. The presence of the mother also protects the offspring against parasitism . In addition, the mother can keep feeding while guarding her progeny without any weight loss, allowing her to collect sufficient food for both herself and her offspring. [ 17 ]
Overall, costs of protecting the egg sac are low. Upon separation from egg sacs, 90% of females have the energy sustenance to lay new sacs, although it does induce a time loss of several weeks that could potentially affect reproductive success. [ 17 ]
In experimental conditions, costs arose if maternal care was not provided, with egg sacs drying out and developing molds , thus illustrating that maternal care is essential for survival. Experimental food-deprived broods reared by the mother induced matriphagy, where 77% of offspring consumed their mother upon birth. This suggests that matriphagy can exist under nutrient-limited conditions, but the costs generally outweigh the benefits when mothers have sufficient access to resources. [ 17 ]
Caecilian amphibians are worm-like in appearance, and mothers have thick epithelial skin layers. The skin on a caecilian mother is used for a form of parent-offspring nutrient transfer. In at least two species, Boulengerula taitana and Siphonops annulatus , the young feed on the mother's skin by tearing it off with their teeth. Because these two are not closely related, either this behaviour is more common than currently observed or it evolved independently. [ 18 ] The consumed skin then regenerates within a few days. [ 5 ]
The Taita African caecilian Boulengerula taitana is an oviparous (egg-laying) caecilian whose skin transforms in brooding females to supply nutrients to growing offspring. The offspring are born with specific dentition that they can use to peel and eat the outer epidermal layer of their mother's skin. Young move around their mother's bodies, using their lower jaws to lift and peel the mother's skin while vigorously pressing their heads against her abdomen . To account for this, the epidermis of brooding females can be up to twice the thickness of non-brooding females. [ 19 ]
Viviparous (developing in the mother) caecilians on the other hand, have specialized fetal dentition which can be used for scraping lipid -rich secretions and cellular materials from the maternal oviduct lining. The ringed caecilian Siphonops annulatus , an oviparous caecilian, exhibits characteristics similar to viviparous caecilians. Mothers have paler skin tones than non-attending females, suggesting that offspring feed on glandular secretions on the mother's skin—a process that resembles mammalian lactation . This scraping method is different from the peeling actions performed by oviparous caecilians. [ 19 ]
For both oviparous and viviparous caecilians, delayed investment is a common benefit. Providing nutrition through the skin allows for redirection of nutrients, yielding fewer and larger offspring than caecilians who only provide their offspring with yolk nutrients. [ 19 ] Rather than the mother sacrificing herself and solely being used for the offspring's nutrition, caecilian mothers supplement their offspring's growth; they provide enough nutrients for the offspring to survive, but not at the cost of their own life.
Stegodyphus mothers liquefy their inner organs and maternal tissue into food deposits. The African social velvet spider Stegodyphus mimosarum and the African social spider Stegodyphus dumicola are two social spider species that eat their mothers and other adult females, which is unique since social spiders do not tend to exhibit cannibalistic life history traits. In these specific spiders, deceased females are often found shriveled with shrunken abdomens. Offspring suck nutrients primarily from the dorsal part of the adult female's abdomen, and she may still be alive during this process. [ 20 ]
This behavior is not quite the same as matriphagy because Stegodyphus spiderlings are perfectly tolerant to other offspring, healthy conspecifics , and members of other species, suggesting that ordinary cannibalism is suppressed. Instead, the parental care exhibited is known as "gerontophagy", or the “consumption of old individuals” (geron = old person, phagy = to feed on). Gerontophagy is the final act of care for the offspring, and some offspring are found larger than others. This implies that some young spiders are already able to feed on prey by themselves and gerontophagy as a source of nutrition is supplemental rather than necessary. Thus, there exists the ‘cannibal's kin-dilemma’, which reveals a form of kin selection in social spiders. In this scenario, kin selection should counteract cannibalism of related individuals in social spiders, but any designated victim should prefer to be eaten by available close relatives. [ 20 ]
Those who have been exposed to matriphagy may be frightened by such a seemingly strange and bizarre natural behavior, especially since it is mainly observed in organisms that are already feared. Thus, matriphagy is often seen as perpetuating a long-held fear of arachnids in human society. [ 21 ]
In contrast, others may look to matriphagy as a leading example of purity, as it represents an instinctive form of altruism . Altruism in this case refers to an "intentional action ultimately for the welfare of others that entails at least the possibility of either no benefit or a loss to the actor," and is a highly popularized and desirable concept in many human cultures. [ 11 ] Matriphagy can be viewed as altruism, insofar as participating mothers "sacrifice" their survival for the welfare of their offspring. [ 11 ] Although participation in matriphagy is not truly an intentional action, mothers are nevertheless driven by natural selection pressures based on offspring fitness to engage in such behavior. [ 11 ] This in turn creates a cycle that perpetuates altruistic matriphagous behavior through generations. Such an example of altruism on a purely biological level differs severely from human standards of altruism, which are influenced by moral virtues such as rationality , trust , and reciprocity . [ 11 ] | https://en.wikipedia.org/wiki/Matriphagy |
Matrix-M is a vaccine adjuvant , a substance that is added to various vaccines to stimulate the immune response . [ 1 ] [ 2 ] [ 3 ] It was patented in 2020 by Novavax [ 4 ] and is composed of nanoparticles from saponins extracted from Quillaja saponaria (soapbark) trees, cholesterol , and phospholipids . [ 5 ] [ 6 ] [ 7 ] It is an immune stimulating complex ( ISCOM ), which are nanospheres formed when saponin is mixed with two types of fats. [ 8 ]
Matrix-M contains a complex mix of saponins extracted from the bark of soapbark trees ( Quillaia ) packaged into nanoparticles made of cholesterol and phospholipids. 15% of the nanoparticles are known as Matrix-C and contain saponins derived from "Fraction C" of the tree bark extract (mainly QS-21 ). Matrix-C has strong adjuvant activity but is also highly reactogenic (causing lethargy and lethality in mice). The remaining 85% are known as Matrix-A and contain "Fraction A" saponins. Matrix-A is a weaker adjuvant but is also very well tolerated. Combined, they form a strong adjuvant with acceptable reactogenicity. [ 9 ]
Packaging saponins into nanoparticles achieves three things: [ 9 ]
Forerunners to the Matrix-M technology include ISCOM (Morein et al. , 1984) and ISCOMATRIX (CSL Limited, 2012). [ 9 ]
Adjuvants increase the body's immune response to a vaccine by creating higher levels of antibodies . [ 10 ] They can enhance, modulate, and/or prolong the body's immune response, reducing the number of vaccinations needed for immunization. [ 11 ]
The Matrix-M adjuvant is used in a number of vaccine candidates, including the malaria vaccine R21/Matrix-M, [ 1 ] [ 12 ] influenza vaccines , [ 2 ] and the approved Novavax COVID-19 vaccine . [ 5 ] [ 13 ] In 2021, the R21/Matrix-M malaria vaccine candidate showed 72% efficacy in sites with seasonal implementation and 67% in sites with age-based implementation in the modified per-protocol analysis. In influenza vaccine candidates, Matrix-M was shown to offer cross-protection against multiple strains of influenza . [ 13 ] [ 2 ] [ 3 ]
Novavax is also testing a combined flu and COVID-19 vaccine candidate with Matrix-M. [ 14 ]
| https://en.wikipedia.org/wiki/Matrix-M |
In mass spectrometry , matrix-assisted laser desorption/ionization ( MALDI ) is an ionization technique that uses a laser energy-absorbing matrix to create ions from large molecules with minimal fragmentation. [ 1 ] It has been applied to the analysis of biomolecules ( biopolymers such as DNA , proteins , peptides and carbohydrates ) and various organic molecules (such as polymers , dendrimers and other macromolecules ), which tend to be fragile and fragment when ionized by more conventional ionization methods. It is similar in character to electrospray ionization (ESI) in that both techniques are relatively soft (low fragmentation) ways of obtaining ions of large molecules in the gas phase, though MALDI typically produces far fewer multi-charged ions [ 2 ] .
MALDI methodology is a three-step process. First, the sample is mixed with a suitable matrix material and applied to a metal plate. Second, a pulsed laser irradiates the sample, triggering ablation and desorption of the sample and matrix material. Finally, the analyte molecules are ionized by being protonated or deprotonated in the hot plume of ablated gases, and then they can be accelerated into whichever mass spectrometer is used to analyse them. [ 3 ]
The term matrix-assisted laser desorption ionization (MALDI) was coined in 1985 by Franz Hillenkamp , Michael Karas and their colleagues. [ 4 ] These researchers found that the amino acid alanine could be ionized more easily if it was mixed with the amino acid tryptophan and irradiated with a pulsed 266 nm laser. The tryptophan was absorbing the laser energy and helping to ionize the non-absorbing alanine. Peptides up to the 2843 Da peptide melittin could be ionized when mixed with this kind of "matrix". [ 5 ] The breakthrough for large molecule laser desorption ionization came in 1987 when Koichi Tanaka of Shimadzu Corporation and his co-workers used what they called the "ultra fine metal plus liquid matrix method" that combined 30 nm cobalt particles in glycerol with a 337 nm nitrogen laser for ionization. [ 6 ] Using this laser and matrix combination, Tanaka was able to ionize biomolecules as large as the 34,472 Da protein carboxypeptidase-A. Tanaka received one-quarter of the 2002 Nobel Prize in Chemistry for demonstrating that, with the proper combination of laser wavelength and matrix, a protein can be ionized. [ 7 ] Karas and Hillenkamp were subsequently able to ionize the 67 kDa protein albumin using a nicotinic acid matrix and a 266 nm laser. [ 8 ] Further improvements were realized through the use of a 355 nm laser and the cinnamic acid derivatives ferulic acid , caffeic acid and sinapinic acid as the matrix. [ 9 ] The availability of small and relatively inexpensive nitrogen lasers operating at 337 nm wavelength and the first commercial instruments introduced in the early 1990s brought MALDI to an increasing number of researchers. [ 10 ] Today, mostly organic matrices are used for MALDI mass spectrometry.
The matrix consists of crystallized molecules, of which the three most commonly used are sinapinic acid , α-cyano-4-hydroxycinnamic acid (α-CHCA, alpha-cyano or alpha-matrix) and 2,5-dihydroxybenzoic acid (DHB). [ 16 ] A solution of one of these molecules is made, often in a mixture of highly purified water and an organic solvent such as acetonitrile (ACN) or ethanol. A counter ion source such as trifluoroacetic acid (TFA) is usually added to generate the [M+H]+ ions. A good example of a matrix solution would be 20 mg/mL sinapinic acid in ACN:water:TFA (50:50:0.1).
The identification of suitable matrix compounds is determined to some extent by trial and error, but suitable compounds share some specific molecular design considerations. They are of a fairly low molecular weight (to allow easy vaporization), but are large enough (with a low enough vapor pressure) not to evaporate during sample preparation or while standing in the mass spectrometer. They are often acidic, and therefore act as a proton source to encourage ionization of the analyte. Basic matrices have also been reported. [ 17 ] They have a strong optical absorption in either the UV or IR range, [ 18 ] so that they rapidly and efficiently absorb the laser irradiation. This efficiency is commonly associated with chemical structures incorporating several conjugated double bonds , as seen in the structure of cinnamic acid . They are functionalized with polar groups, allowing their use in aqueous solutions. They typically contain a chromophore .
The matrix solution is mixed with the analyte (e.g. protein -sample). A mixture of water and organic solvent allows both hydrophobic and water-soluble ( hydrophilic ) molecules to dissolve into the solution. This solution is spotted onto a MALDI plate (usually a metal plate designed for this purpose). The solvents vaporize, leaving only the recrystallized matrix, but now with analyte molecules embedded into MALDI crystals. The matrix and the analyte are said to be co-crystallized. Co-crystallization is a key issue in selecting a proper matrix to obtain a good quality mass spectrum of the analyte of interest.
In analysis of biological systems, inorganic salts, which are also part of protein extracts, interfere with the ionization process. The salts can be removed by solid phase extraction or by washing the dried-droplet MALDI spots with cold water. [ 19 ] Both methods can also remove other substances from the sample. The matrix-protein mixture is not homogeneous because the polarity difference leads to a separation of the two substances during co-crystallization. The spot diameter of the target is much larger than that of the laser, which makes it necessary to make many laser shots at different places of the target, to get the statistical average of the substance concentration within the target spot.
The matrix can be used to tune the instrument to ionize the sample in different ways. As mentioned above, acid-base like reactions are often utilized to ionize the sample, however, molecules with conjugated pi systems , such as naphthalene like compounds, can also serve as an electron acceptor and thus a matrix for MALDI/TOF. [ 20 ] This is particularly useful in studying molecules that also possess conjugated pi systems. [ 21 ] The most widely used application for these matrices is studying porphyrin -like compounds such as chlorophyll . These matrices have been shown to have better ionization patterns that do not result in odd fragmentation patterns or complete loss of side chains. [ 22 ] It has also been suggested that conjugated porphyrin like molecules can serve as a matrix and cleave themselves eliminating the need for a separate matrix compound. [ 23 ]
There are several variations of the MALDI technology and comparable instruments are today produced for very different purposes, from more academic and analytical, to more industrial and high throughput. The mass spectrometry field has expanded into requiring ultrahigh resolution mass spectrometry such as the FT-ICR instruments [ 24 ] [ 25 ] as well as more high-throughput instruments. [ 26 ] As many MALDI MS instruments can be bought with an interchangeable ionization source ( electrospray ionization , MALDI, atmospheric pressure ionization , etc.) the technologies often overlap and many times any soft ionization method could potentially be used. For more variations of soft ionization methods see: Soft laser desorption or Ion source .
MALDI techniques typically employ the use of UV lasers such as nitrogen lasers (337 nm) and frequency-tripled and quadrupled Nd:YAG lasers (355 nm and 266 nm respectively). [ 27 ]
Infrared laser wavelengths used for infrared MALDI include the 2.94 μm Er:YAG laser , mid-IR optical parametric oscillator , and 10.6 μm carbon dioxide laser . Although not as common, infrared lasers are used due to their softer mode of ionization. [ 28 ] IR-MALDI also has the advantage of greater material removal (useful for biological samples), less low-mass interference, and compatibility with other matrix-free laser desorption mass spectrometry methods.
The type of mass spectrometer most widely used with MALDI is the time-of-flight mass spectrometer (TOF), mainly due to its large mass range. The TOF measurement procedure is also ideally suited to the MALDI ionization process since the pulsed laser takes individual 'shots' rather than working in continuous operation. MALDI-TOF instruments are often equipped with a reflectron (an "ion mirror") that reflects ions using an electric field. This increases the ion flight path, thereby increasing the time-of-flight difference between ions of different m/z and increasing resolution. Modern commercial reflectron TOF instruments reach a resolving power m/Δm of 50,000 FWHM (full-width half-maximum, Δm defined as the peak width at 50% of peak height) or more. [ 29 ]
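The dependence of flight time on m/z can be made concrete with a short numerical sketch. The following Python example is illustrative only: the drift length, accelerating voltage, and ion masses are assumed values, not parameters of any particular commercial instrument.

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
AMU      = 1.66053906660e-27  # atomic mass unit, kg

def flight_time(mz, charge=1, voltage=20e3, drift_length=1.5):
    """Ideal flight time (s) of an ion of mass-to-charge ratio mz (Da) that is
    accelerated through `voltage` volts and drifts over `drift_length` metres.
    From z*e*U = (1/2)*m*v**2 it follows that t = L * sqrt(m / (2*z*e*U))."""
    m = mz * charge * AMU
    v = math.sqrt(2.0 * charge * E_CHARGE * voltage / m)
    return drift_length / v

for mz in (1000.0, 1001.0, 10000.0):
    print(f"m/z {mz:8.1f}  ->  t = {flight_time(mz) * 1e6:8.3f} microseconds")
```

Because the flight time grows as the square root of m/z, neighbouring peaks arrive only fractions of a microsecond apart, which is why the longer flight path provided by a reflectron improves resolving power.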
MALDI has been coupled with IMS -TOF MS to identify phosphorylated and non-phosphorylated peptides. [ 30 ] [ 31 ]
MALDI- FT-ICR MS has been demonstrated to be a useful technique where high resolution MALDI-MS measurements are desired. [ 32 ]
Atmospheric pressure (AP) matrix-assisted laser desorption/ionization (MALDI) is an ionization technique (ion source) that, in contrast to vacuum MALDI, operates in a normal atmospheric environment. [ 33 ] The main difference between vacuum MALDI and AP-MALDI is the pressure at which the ions are created. In vacuum MALDI, ions are typically produced at 10 mTorr or less, while in AP-MALDI ions are formed at atmospheric pressure. In the past, the main disadvantage of the AP-MALDI technique compared to conventional vacuum MALDI has been its limited sensitivity; however, ions can be transferred into the mass spectrometer with high efficiency and attomole detection limits have been reported. [ 34 ] AP-MALDI is used in mass spectrometry (MS) in a variety of applications ranging from proteomics to drug discovery. Popular topics that are addressed by AP-MALDI mass spectrometry include: proteomics; mass analysis of DNA, RNA, PNA, lipids, oligosaccharides, phosphopeptides, bacteria, small molecules and synthetic polymers, similar applications as are also available for vacuum MALDI instruments. The AP-MALDI ion source is easily coupled to an ion trap mass spectrometer [ 35 ] or any other MS system equipped with an electrospray ionization (ESI) or nanoESI source.
MALDI with ionization at reduced pressure is known to produce mainly singly-charged ions (see "Ionization mechanism" below). In contrast, ionization at atmospheric pressure can generate highly-charged analytes, as was first shown for infrared [ 36 ] and later also for nitrogen lasers. [ 37 ] Multiple charging of analytes is of great importance because it makes it possible to measure high-molecular-weight compounds such as proteins in instruments that provide only limited m/z detection ranges, such as quadrupoles. Besides the pressure, the composition of the matrix is important for achieving this effect.
In aerosol mass spectrometry , one of the ionization techniques consists in firing a laser to individual droplets. These systems are called single particle mass spectrometers (SPMS) . [ 38 ] The sample may optionally be mixed with a MALDI matrix prior to aerosolization.
The laser is fired at the matrix crystals in the dried-droplet spot. The matrix absorbs the laser energy and it is thought that primarily the matrix is desorbed and ionized (by addition of a proton ) by this event. The hot plume produced during ablation contains many species: neutral and ionized matrix molecules, protonated and deprotonated matrix molecules, matrix clusters and nanodroplets . Ablated species may participate in the ionization of analyte, though the mechanism of MALDI is still debated. The matrix is then thought to transfer protons to the analyte molecules (e.g., protein molecules), thus charging the analyte. [ 39 ] An ion observed after this process will consist of the initial neutral molecule [M] with ions added or removed. This is called a quasimolecular ion, for example [M+H] + in the case of an added proton, [M+Na] + in the case of an added sodium ion, or [M-H] − in the case of a removed proton. MALDI is capable of creating singly charged ions or multiply charged ions ([M+nH] n+ ) depending on the nature of the matrix, the laser intensity, and/or the voltage used. Note that these are all even-electron species. Ion signals of radical cations (photoionized molecules) can be observed, e.g., in the case of matrix molecules and other organic molecules.
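The quasimolecular-ion notation translates directly into simple arithmetic on the neutral mass. The Python sketch below shows the m/z values expected for the adducts named above; the neutral peptide mass used is a hypothetical value chosen only for illustration.

```python
PROTON = 1.007276   # monoisotopic mass of a proton, Da
NA_ION = 22.989218  # monoisotopic mass of Na+ (sodium atom minus one electron), Da

def adduct_mz(neutral_mass, adduct):
    """m/z of common MALDI quasimolecular ions for a neutral monoisotopic mass."""
    if adduct == "[M+H]+":
        return neutral_mass + PROTON
    if adduct == "[M+Na]+":
        return neutral_mass + NA_ION
    if adduct == "[M-H]-":
        return neutral_mass - PROTON
    raise ValueError(f"unknown adduct: {adduct}")

def protonated_mz(neutral_mass, n):
    """m/z of a multiply charged [M+nH]n+ ion."""
    return (neutral_mass + n * PROTON) / n

M = 1569.742  # hypothetical peptide monoisotopic mass, Da
for adduct in ("[M+H]+", "[M+Na]+", "[M-H]-"):
    print(adduct, round(adduct_mz(M, adduct), 4))
print("[M+2H]2+", round(protonated_mz(M, 2), 4))
```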
The gas phase proton transfer model, [ 3 ] implemented as the coupled physical and chemical dynamics (CPCD) model, [ 40 ] of UV laser MALDI postulates primary and secondary processes leading to ionization. [ 41 ] Primary processes involve initial charge separation through absorption of photons by the matrix and pooling of the energy to form matrix ion pairs. Primary ion formation occurs through absorption of a UV photon to create excited state molecules by
where S 0 is the ground electronic state, S 1 the first electronic excited state, and S n is a higher electronic excited state. [ 40 ] The product ions can be proton transfer or electron transfer ion pairs, indicated by M + and M − above. Secondary processes involve ion-molecule reactions to form analyte ions.
The lucky survivor model (cluster ionization mechanism [ 3 ] ) postulates that analyte molecules are incorporated in the matrix maintaining the charge state from solution. [ 42 ] [ 43 ] Ion formation occurs through charge separation upon fragmentation of laser ablated clusters. [ 3 ] Ions that are not neutralized by recombination with photoelectrons or counter ions are the so-called lucky survivors.
The thermal model postulates that the high temperature facilitates the proton transfer between matrix and analyte in melted matrix liquid. [ 44 ] Ion-to-neutral ratio is an important parameter to justify the theoretical model, and the mistaken citation of ion-to-neutral ratio could result in an erroneous determination of the ionization mechanism. [ 45 ] The model quantitatively predicts the increase in total ion intensity as a function of the concentration and proton affinity of the analytes, and the ion-to-neutral ratio as a function of the laser fluences. [ 46 ] [ 47 ] This model also suggests that metal ion adducts (e.g., [M+Na] + or [M+K] + ) are mainly generated from the thermally induced dissolution of salt. [ 48 ]
The matrix-assisted ionization (MAI) method uses matrix preparation similar to MALDI but does not require laser ablation to produce analyte ions of volatile or nonvolatile compounds. [ 49 ] Simply exposing the matrix with analyte to the vacuum of the mass spectrometer creates ions with nearly identical charge states to electrospray ionization. [ 50 ] It has been suggested that there is likely a mechanistic commonality between this process and MALDI. [ 43 ]
Ion yield is typically estimated to range from 10 −4 to 10 −7 , [ 51 ] with some experiments hinting at even lower yields of 10 −9 . [ 52 ] The issue of low ion yields was addressed shortly after the introduction of MALDI by various attempts, including post-ionization utilizing a second laser. [ 53 ] Most of these attempts showed only limited success, with low signal increases. This might be attributed to the fact that axial time-of-flight instruments were used, which operate at source-region pressures of 10 −5 to 10 −6 , resulting in rapid plume expansion with particle velocities of up to 1000 m/s. [ 54 ] In 2015, successful laser post-ionization was reported, using a modified MALDI source operated at an elevated pressure of ~3 mbar coupled to an orthogonal time-of-flight mass analyzer, and employing a wavelength-tunable post-ionization laser operated at wavelengths from 260 nm to 280 nm, below the two-photon ionization threshold of the matrices used, which elevated the ion yields of several lipids and small molecules by up to three orders of magnitude. [ 55 ] This approach, called MALDI-2 because of the second laser and the second MALDI-like ionization process, was subsequently adopted for other mass spectrometers, all equipped with sources operating in the low mbar range. [ 56 ] [ 57 ]
In proteomics , MALDI is used for the rapid identification of proteins isolated by using gel electrophoresis : SDS-PAGE , size exclusion chromatography , affinity chromatography , strong/weak ion exchange, isotope coded protein labeling (ICPL), and two-dimensional gel electrophoresis . Peptide mass fingerprinting is the most popular analytical application of MALDI-TOF mass spectrometers. MALDI TOF/TOF mass spectrometers are used to reveal amino acid sequence of peptides using post-source decay or high energy collision-induced dissociation (further use see mass spectrometry ).
MALDI-TOF has been used to characterise post-translational modifications . For example, it has been widely applied to study protein methylation and demethylation . [ 58 ] [ 59 ] However, care must be taken when studying post-translational modifications by MALDI-TOF. For example, loss of sialic acid has been reported when dihydroxybenzoic acid (DHB) is used as a matrix for MALDI MS analysis of glycosylated peptides. Using sinapinic acid, 4-HCCA and DHB as matrices, S. Martin studied loss of sialic acid in glycosylated peptides by metastable decay in MALDI/TOF in linear mode and reflector mode. [ 60 ] A group at Shimadzu Corporation derivatized the sialic acid by an amidation reaction as a way to improve detection sensitivity [ 61 ] and also demonstrated that an ionic liquid matrix reduces the loss of sialic acid during MALDI/TOF MS analysis of sialylated oligosaccharides. [ 62 ] THAP, [ 63 ] DHAP, [ 64 ] and a mixture of 2-aza-2-thiothymine and phenylhydrazine [ 65 ] have been identified as matrices that can be used to minimize loss of sialic acid during MALDI MS analysis of glycosylated peptides. It has been reported that a reduction in loss of some post-translational modifications can be accomplished if IR MALDI is used instead of UV MALDI. [ 66 ]
Besides proteins, MALDI-TOF has also been applied to study lipids . [ 67 ] For example, it has been applied to study the catalytic reactions of phospholipases . [ 68 ] [ 69 ] In addition to lipids, oligonucleotides have also been characterised by MALDI-TOF. For example, in molecular biology, a mixture of 5-methoxysalicylic acid and spermine can be used as a matrix for oligonucleotides analysis in MALDI mass spectrometry, [ 70 ] for instance after oligonucleotide synthesis .
Some synthetic macromolecules, such as catenanes and rotaxanes , dendrimers and hyperbranched polymers , and other assemblies, have molecular weights extending into the thousands or tens of thousands, where most ionization techniques have difficulty producing molecular ions. MALDI is a simple and fast analytical method that can allow chemists to rapidly analyze the results of such syntheses and verify their results. [ citation needed ]
In polymer chemistry, MALDI can be used to determine the molar mass distribution . [ 71 ] Polymers with polydispersity greater than 1.2 are difficult to characterize with MALDI due to the signal intensity discrimination against higher mass oligomers. [ 72 ] [ 73 ] [ 74 ]
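Since the molar-mass averages are simple weighted sums over the spectrum, they are easy to compute once the oligomer peaks have been picked. The sketch below uses invented peak data and treats peak intensity as proportional to the number of chains at each mass, which, as noted above, is only an approximation because of the intensity discrimination against higher-mass oligomers.

```python
# invented oligomer masses (Da) and their MALDI peak intensities
masses      = [2000.0, 2100.0, 2200.0, 2300.0, 2400.0]
intensities = [10.0, 40.0, 60.0, 35.0, 8.0]

def molar_mass_averages(m, n):
    """Number average Mn, weight average Mw, and dispersity Mw/Mn,
    treating n_i as the number of chains of mass m_i."""
    total_n  = sum(n)
    total_nm = sum(ni * mi for ni, mi in zip(n, m))
    mn = total_nm / total_n
    mw = sum(ni * mi * mi for ni, mi in zip(n, m)) / total_nm
    return mn, mw, mw / mn

mn, mw, pdi = molar_mass_averages(masses, intensities)
print(f"Mn = {mn:.1f} Da, Mw = {mw:.1f} Da, dispersity = {pdi:.3f}")
```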
A good matrix for polymers is dithranol [ 75 ] or AgTFA . [ 76 ] The sample must first be mixed with dithranol and the AgTFA added afterwards; otherwise the sample will precipitate out of solution.
MALDI-TOF spectra are often used for the identification of microorganisms such as bacteria or fungi. A portion of a colony of the microbe in question is placed onto the sample target and overlaid with matrix. The mass spectra of the expressed proteins are analyzed by dedicated software and compared with stored profiles for species determination in what is known as biotyping. It offers advantages over other immunological or biochemical procedures and has become a common method for species identification in clinical microbiological laboratories. [ 77 ] [ 78 ] Benefits of high-resolution MALDI-MS performed on a Fourier transform ion cyclotron resonance mass spectrometer (also known as FT-MS) have been demonstrated for typing and subtyping viruses through single ion detection, known as proteotyping, with a particular focus on influenza viruses. [ 79 ]
One main advantage over other microbiological identification methods is its ability to rapidly and reliably identify, at low cost, a wide variety of microorganisms directly from the selective medium used to isolate them. The absence of the need to purify the suspect or "presumptive" colony [ 80 ] allows for much faster turn-around times. For example, it has been demonstrated that MALDI-TOF can be used to detect bacteria directly from blood cultures. [ 81 ]
Another advantage is the potential to predict antibiotic susceptibility of bacteria. A single mass spectral peak can predict methicillin resistance of Staphylococcus aureus . [ 82 ] MALDI can also detect carbapenemase of carbapenem-resistant enterobacteriaceae , [ 83 ] including Acinetobacter baumannii [ 84 ] and Klebsiella pneumoniae . [ 85 ] However, most proteins that mediate antibiotic resistance are larger than MALDI-TOF's 2000–20,000 Da range for protein peak interpretation, and only occasionally, as in the 2011 Klebsiella pneumoniae carbapenemase (KPC) outbreak at the NIH, can a correlation between a peak and a resistance-conferring protein be made. [ 86 ]
MALDI-TOF spectra have been used for the detection and identification of various parasites such as trypanosomatids , [ 87 ] Leishmania [ 88 ] and Plasmodium . [ 89 ] In addition to these unicellular parasites, MALDI/TOF can be used for the identification of parasitic insects such as lice [ 90 ] or cercariae , the free-swimming stage of trematodes . [ 91 ]
MALDI-TOF spectra are often utilized in tandem with other analysis and spectroscopy techniques in the diagnosis of diseases. MALDI/TOF is a diagnostic tool with much potential because it allows for the rapid identification of proteins and changes to proteins without the cost or computing power of sequencing, or the skill and time needed to solve a crystal structure in X-ray crystallography . [ citation needed ]
One example of this is necrotizing enterocolitis (NEC), which is a devastating disease that affects the bowels of premature infants. The symptoms of NEC are very similar to those of sepsis , and many infants die awaiting diagnosis and treatment. MALDI/TOF was used to identify bacteria present in the fecal matter of NEC positive infants. This study focused on characterization of the fecal microbiota associated with NEC and did not address the mechanism of disease. There is hope that a similar technique could be used as a quick, diagnostic tool that would not require sequencing. [ 92 ]
Another example of the diagnostic power of MALDI/TOF is in the area of cancer . Pancreatic cancer remains one of the most deadly and difficult to diagnose cancers. [ 93 ] Impaired cellular signaling due to mutations in membrane proteins has been long suspected to contribute to pancreatic cancer. [ 94 ] MALDI/TOF has been used to identify a membrane protein associated with pancreatic cancer and at one point may even serve as an early detection technique. [ 95 ] [ non-primary source needed ]
MALDI/TOF can also potentially be used to guide treatment as well as diagnosis. MALDI/TOF serves as a method for determining the drug resistance of bacteria, especially to β-lactams (the penicillin family). MALDI/TOF detects the presence of carbapenemases, which indicates resistance to standard antibiotics. It is predicted that this could serve as a method for identifying a bacterium as drug resistant in as little as three hours. This technique could help physicians decide whether to prescribe more aggressive antibiotics initially. [ 96 ]
Following initial observations that some peptide-peptide complexes could survive MALDI deposition and ionization, [ 97 ] studies of large protein complexes using MALDI-MS have been reported. [ 98 ] [ 99 ]
While MALDI is a common technique for large macro-molecules, it is often possible to also analyze small molecules with masses below 1000 Da. The problem with small molecules is that of matrix effects, where signal interference, detector saturation, or suppression of the analyte signal is possible, since the matrices often consist of small molecules themselves. The choice of matrix is highly dependent on what molecules are to be analyzed. [ 100 ] [ 101 ]
Because MALDI is a soft ionization source, it is used on a wide variety of biomolecules. This has led to new applications such as MALDI imaging mass spectrometry, a technique that allows the spatial distribution of biomolecules to be imaged. [ 102 ] | https://en.wikipedia.org/wiki/Matrix-assisted_laser_desorption/ionization |
In chemical analysis , matrix refers to the components of a sample other than the analyte [ 1 ] of interest. The matrix can have a considerable effect on the way the analysis is conducted and on the quality of the results obtained; such effects are called matrix effects. [ 2 ] For example, the ionic strength of the solution can have an effect on the activity coefficients of the analytes. [ 3 ] [ 4 ] The most common approach for accounting for matrix effects is to build a calibration curve using standard samples of known analyte concentration whose composition approximates that of the sample matrix as closely as possible. [ 2 ] This is especially important for solid samples where there is a strong matrix influence. [ 5 ] In cases with complex or unknown matrices, the standard addition method can be used. [ 3 ] In this technique, the response of the sample is measured and recorded, for example, using an electrode selective for the analyte. Then, a small volume of standard solution is added and the response is measured again. Ideally, the standard addition should increase the analyte concentration by a factor of 1.5 to 3, and several additions should be averaged. The volume of standard solution should be small enough to disturb the matrix as little as possible.
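For a detector whose response is directly proportional to concentration, a single standard addition already determines the analyte concentration by simple algebra. The Python sketch below illustrates that calculation with invented numbers; note that the ion-selective electrode mentioned above has a logarithmic (Nernstian) response, for which a different expression would be needed.

```python
def standard_addition(signal_before, signal_after, v_sample, v_standard, c_standard):
    """Analyte concentration in the original sample from one standard addition,
    assuming signal = k * concentration.
    S1 = k*cx,  S2 = k*(cx*Vx + cs*Vs)/(Vx + Vs)
    =>  cx = S1*cs*Vs / (S2*(Vx + Vs) - S1*Vx)."""
    numerator   = signal_before * c_standard * v_standard
    denominator = signal_after * (v_sample + v_standard) - signal_before * v_sample
    return numerator / denominator

cx = standard_addition(signal_before=0.42, signal_after=0.68,
                       v_sample=25.0, v_standard=1.0, c_standard=100.0)
print(f"estimated analyte concentration: {cx:.2f} (same units as c_standard)")
```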
Matrix enhancement and suppression is frequently observed in modern analytical routines, such as GC , HPLC , and ICP.
The matrix effect is quantitated by the use of the following formula:
$\mathrm{ME} = 100 \times \frac{A(\text{extract})}{A(\text{standard})},$
where
A(extract) is the peak area of analyte, when diluted with matrix extract.
A(standard) is the peak area of analyte in the absence of matrix.
The concentration of analyte in both standards should be the same. A matrix effect value close to 100 indicates absence of matrix influence. A matrix effect value of less than 100 indicates suppression, while a value larger than 100 is a sign of matrix enhancement.
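A minimal sketch of this calculation, using invented peak areas, is given below; it implements the percentage definition just described (the alternative definition in the next paragraph simply shifts the scale so that 0 means no matrix effect).

```python
def matrix_effect_percent(area_extract, area_standard):
    """100 -> no matrix effect, < 100 -> suppression, > 100 -> enhancement."""
    return 100.0 * area_extract / area_standard

me = matrix_effect_percent(area_extract=8.2e5, area_standard=1.0e6)
label = "suppression" if me < 100 else ("enhancement" if me > 100 else "no effect")
print(f"matrix effect = {me:.1f}% ({label})")
```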
An alternative definition of the matrix effect utilizes the formula:
$\mathrm{ME} = 100 \times \frac{A(\text{extract})}{A(\text{standard})} - 100.$
The advantages of this definition are that negative values indicate suppression, while positive values are a sign of matrix enhancement. Ideally, a value of 0 corresponds to the absence of a matrix effect. | https://en.wikipedia.org/wiki/Matrix_(chemical_analysis) |
For certain applications in linear algebra , it is useful to know properties of the probability distribution of the largest eigenvalue of a finite sum of random matrices . Suppose $\{\mathbf{X}_k\}$ is a finite sequence of random matrices. Analogous to the well-known Chernoff bound for sums of scalars, a bound on the following is sought for a given parameter $t$:
$\Pr\left\{\lambda_{\max}\Bigl(\sum_k \mathbf{X}_k\Bigr) \geq t\right\}.$
The following theorems answer this general question under various assumptions; these assumptions are named below by analogy to their classical, scalar counterparts. All of these theorems can be found in ( Tropp 2010 ), as the specific application of a general result which is derived below. A summary of related works is given.
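Before the formal statements, a small numerical illustration (not one of the theorems that follow) shows the phenomenon these bounds quantify: the largest eigenvalue of a sum of many independent random positive-semidefinite matrices concentrates tightly around a typical value. The dimensions and the Gaussian construction below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 20, 200, 500

lam_max = np.empty(trials)
for trial in range(trials):
    G = rng.standard_normal((n, d))
    S = G.T @ G / d                     # = sum_k g_k g_k^T / d, a sum of n rank-one PSD terms
    lam_max[trial] = np.linalg.eigvalsh(S)[-1]

print("mean of lambda_max over trials :", round(lam_max.mean(), 3))
print("std  of lambda_max over trials :", round(lam_max.std(), 3))
# The standard deviation is small compared with the mean: large deviations of
# lambda_max above its typical value are rare, which is what the matrix
# Chernoff-type inequalities below make precise.
```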
Consider a finite sequence $\{\mathbf{A}_k\}$ of fixed, self-adjoint matrices with dimension $d$, and let $\{\xi_k\}$ be a finite sequence of independent standard normal or independent Rademacher random variables. Then, for all $t \geq 0$,
$\Pr\left\{\lambda_{\max}\Bigl(\sum_k \xi_k \mathbf{A}_k\Bigr) \geq t\right\} \leq d\, e^{-t^2/2\sigma^2},$
where
$\sigma^2 = \Bigl\| \sum_k \mathbf{A}_k^2 \Bigr\|.$
Consider a finite sequence $\{\mathbf{B}_k\}$ of fixed matrices with dimension $d_1 \times d_2$, and let $\{\xi_k\}$ be a finite sequence of independent standard normal or independent Rademacher random variables. Define the variance parameter
$\sigma^2 = \max\left\{ \Bigl\|\sum_k \mathbf{B}_k \mathbf{B}_k^*\Bigr\|, \Bigl\|\sum_k \mathbf{B}_k^* \mathbf{B}_k\Bigr\| \right\}.$
Then, for all $t \geq 0$,
$\Pr\left\{ \Bigl\|\sum_k \xi_k \mathbf{B}_k\Bigr\| \geq t \right\} \leq (d_1 + d_2)\, e^{-t^2/2\sigma^2}.$
The classical Chernoff bounds concern the sum of independent, nonnegative, and uniformly bounded random variables.
In the matrix setting, the analogous theorem concerns a sum of positive-semidefinite random matrices subjected to a uniform eigenvalue bound.
Consider a finite sequence $\{\mathbf{X}_k\}$ of independent, random, self-adjoint matrices with dimension $d$.
Assume that each random matrix satisfies
almost surely.
Define
Then
Consider a sequence $\{\mathbf{X}_k : k = 1, 2, \ldots, n\}$ of independent, random, self-adjoint matrices that satisfy
almost surely.
Compute the minimum and maximum eigenvalues of the average expectation,
Then
The binary information divergence is defined as
$D(a \,\|\, u) = a \log\frac{a}{u} + (1-a)\log\frac{1-a}{1-u}$
for $a, u \in [0, 1]$.
In the scalar setting, Bennett and Bernstein inequalities describe the upper tail of a sum of independent, zero-mean random variables that are either bounded or subexponential . In the matrix
case, the analogous results concern a sum of zero-mean random matrices.
Consider a finite sequence $\{\mathbf{X}_k\}$ of independent, random, self-adjoint matrices with dimension $d$.
Assume that each random matrix satisfies
almost surely.
Compute the norm of the total variance,
Then, the following chain of inequalities holds for all t ≥ 0 {\displaystyle t\geq 0} :
The function $h(u)$ is defined as $h(u) = (1+u)\log(1+u) - u$ for $u \geq 0$.
Consider a sequence $\{\mathbf{X}_k\}_{k=1}^{n}$ of independent and identically distributed random column vectors in $\mathbb{R}^d$. Assume that each random vector satisfies $\|\mathbf{X}_k\|_2 \leq M$ almost surely and $\|\operatorname{E}[\mathbf{X}_k \mathbf{X}_k^T]\|_2 \leq 1$. Then, for all $t \geq 0$, [ 1 ]
Consider a finite sequence $\{\mathbf{X}_k\}$ of independent, random, self-adjoint matrices with dimension $d$.
Assume that
for $p = 2, 3, 4, \ldots$.
Compute the variance parameter,
Then, the following chain of inequalities holds for all t ≥ 0 {\displaystyle t\geq 0} :
Consider a finite sequence $\{\mathbf{Z}_k\}$ of independent random matrices with dimension $d_1 \times d_2$.
Assume that each random matrix satisfies
almost surely.
Define the variance parameter
Then, for all $t \geq 0$, the following bound holds.
The scalar version of Azuma's inequality states that a scalar martingale exhibits normal concentration about its mean value, and the scale for deviations is controlled by the total maximum squared range of the difference sequence.
The following is the extension to the matrix setting.
Consider a finite adapted sequence $\{\mathbf{X}_k\}$ of self-adjoint matrices with dimension $d$, and a fixed sequence $\{\mathbf{A}_k\}$ of self-adjoint matrices that satisfy
almost surely.
Compute the variance parameter
Then, for all t ≥ 0 {\displaystyle t\geq 0}
The constant 1/8 can be improved to 1/2 when there is additional information available. One case occurs when each summand $\mathbf{X}_k$ is conditionally symmetric. Another example requires the assumption that $\mathbf{X}_k$ commutes almost surely with $\mathbf{A}_k$.
Adding the assumption that the summands in the matrix Azuma inequality are independent gives a matrix extension of Hoeffding's inequalities .
Consider a finite sequence $\{\mathbf{X}_k\}$ of independent, random, self-adjoint matrices with dimension $d$, and let $\{\mathbf{A}_k\}$ be a sequence of fixed self-adjoint matrices.
Assume that each random matrix satisfies
almost surely.
Then, for all t ≥ 0 {\displaystyle t\geq 0}
where
An improvement of this result was established in ( Mackey et al. 2012 ):
for all t ≥ 0 {\displaystyle t\geq 0}
where
In the scalar setting, McDiarmid's inequality provides one common way of bounding the differences by applying Azuma's inequality to a Doob martingale . A version of the bounded differences inequality holds in the matrix setting.
Let $\{Z_k : k = 1, 2, \ldots, n\}$ be an independent family of random variables, and let $\mathbf{H}$ be a function that maps $n$ variables to a self-adjoint matrix of dimension $d$.
Consider a sequence $\{\mathbf{A}_k\}$ of fixed self-adjoint matrices that satisfy
$\bigl(\mathbf{H}(z_1,\ldots,z_k,\ldots,z_n) - \mathbf{H}(z_1,\ldots,z'_k,\ldots,z_n)\bigr)^2 \preceq \mathbf{A}_k^2,$
where $z_k$ and $z'_k$ range over all possible values of $Z_k$ for each index $k$.
Compute the variance parameter
Then, for all t ≥ 0 {\displaystyle t\geq 0}
where $\mathbf{z} = (Z_1, \ldots, Z_n)$.
An improvement of this result was established in ( Paulin, Mackey & Tropp 2013 ) (see also ( Paulin, Mackey & Tropp 2016 )):
for all t ≥ 0 {\displaystyle t\geq 0}
where $\mathbf{z} = (Z_1, \ldots, Z_n)$ and $\sigma^2 = \Bigl\|\sum_k \mathbf{A}_k^2\Bigr\|$.
The first bounds of this type were derived by ( Ahlswede & Winter 2003 ). Recall the theorem above for self-adjoint matrix Gaussian and Rademacher bounds :
For a finite sequence $\{\mathbf{A}_k\}$ of fixed, self-adjoint matrices with dimension $d$, and for $\{\xi_k\}$ a finite sequence of independent standard normal or independent Rademacher random variables, for all $t \geq 0$,
$\Pr\left\{\lambda_{\max}\Bigl(\sum_k \xi_k \mathbf{A}_k\Bigr) \geq t\right\} \leq d\, e^{-t^2/2\sigma^2},$
where
$\sigma^2 = \Bigl\| \sum_k \mathbf{A}_k^2 \Bigr\|.$
Ahlswede and Winter would give the same result, except with
$\sigma_{\mathrm{AW}}^2 = \sum_k \lambda_{\max}\bigl(\mathbf{A}_k^2\bigr).$
By comparison, the $\sigma^2$ in the theorem above interchanges the summation and $\lambda_{\max}$; that is, it is the largest eigenvalue of the sum rather than the sum of the largest eigenvalues. It is never larger than the Ahlswede–Winter value (by the norm triangle inequality ), but it can be much smaller. Therefore, the theorem above gives a tighter bound than the Ahlswede–Winter result.
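This comparison is easy to check numerically. The following sketch draws arbitrary random symmetric matrices, computes both variance parameters, and verifies the inequality between them; the matrix ensemble is an illustrative choice, not part of either paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 50

# arbitrary fixed self-adjoint matrices A_k for the illustration
A = [(M + M.T) / 2 for M in rng.standard_normal((n, d, d))]

sigma2_sum_norm = np.linalg.norm(sum(a @ a for a in A), ord=2)   # || sum_k A_k^2 ||
sigma2_aw       = sum(np.linalg.eigvalsh(a @ a)[-1] for a in A)  # sum_k lambda_max(A_k^2)

print("|| sum A_k^2 ||        :", round(sigma2_sum_norm, 2))
print("sum lambda_max(A_k^2)  :", round(sigma2_aw, 2))
print("first <= second        :", sigma2_sum_norm <= sigma2_aw + 1e-9)
```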
The chief contribution of ( Ahlswede & Winter 2003 ) was the extension of the Laplace-transform method used to prove the scalar Chernoff bound (see Chernoff bound#Additive form (absolute error) ) to the case of self-adjoint matrices. The procedure is given in the derivation below. All of the recent works on this topic follow the same procedure, and the chief differences arise in the subsequent steps. Ahlswede & Winter use the Golden–Thompson inequality to proceed, whereas Tropp ( Tropp 2010 ) uses Lieb's theorem .
Suppose one wished to vary the length of the series ( n ) and the dimensions of the matrices ( d ) while keeping the right-hand side approximately constant. Then n must vary approximately as the logarithm of d . Several papers have attempted to establish a bound without a dependence on dimensions. Rudelson and Vershynin ( Rudelson & Vershynin 2007 ) give a result for matrices which are the outer product of two vectors. ( Magen & Zouzias 2010 ) provide a result without the dimensional dependence for low-rank matrices . The original result was derived independently from the Ahlswede–Winter approach, but ( Oliveira 2010b ) proves a similar result using the Ahlswede–Winter approach.
Finally, Oliveira ( Oliveira 2010a ) proves a result for matrix martingales independently from the Ahlswede–Winter framework. Tropp ( Tropp 2011 ) slightly improves on the result using the Ahlswede–Winter framework. Neither result is presented in this article.
The Laplace transform argument found in ( Ahlswede & Winter 2003 ) is a significant result in its own right:
Let $\mathbf{Y}$ be a random self-adjoint matrix. Then
$\Pr\{\lambda_{\max}(\mathbf{Y}) \geq t\} \leq \inf_{\theta > 0} e^{-\theta t}\, \operatorname{E}\bigl[\operatorname{tr} e^{\theta\mathbf{Y}}\bigr].$
To prove this, fix $\theta > 0$. Then
$\Pr\{\lambda_{\max}(\mathbf{Y}) \geq t\} = \Pr\{\lambda_{\max}(\theta\mathbf{Y}) \geq \theta t\} = \Pr\bigl\{e^{\lambda_{\max}(\theta\mathbf{Y})} \geq e^{\theta t}\bigr\} \leq e^{-\theta t}\,\operatorname{E}\bigl[e^{\lambda_{\max}(\theta\mathbf{Y})}\bigr] \leq e^{-\theta t}\,\operatorname{E}\bigl[\operatorname{tr} e^{\theta\mathbf{Y}}\bigr].$
The second-to-last inequality is Markov's inequality . The last inequality holds since $e^{\lambda_{\max}(\theta\mathbf{Y})} = \lambda_{\max}(e^{\theta\mathbf{Y}}) \leq \operatorname{tr}(e^{\theta\mathbf{Y}})$. Since the left-most quantity is independent of $\theta$, the infimum over $\theta > 0$ remains an upper bound for it.
Thus, our task is to understand $\operatorname{E}[\operatorname{tr}(e^{\theta\mathbf{Y}})]$. Since trace and expectation are both linear, we can commute them, so it is sufficient to consider the matrix generating function $\mathbf{M}_{\mathbf{Y}}(\theta) := \operatorname{E} e^{\theta\mathbf{Y}}$. This is where the methods of ( Ahlswede & Winter 2003 ) and ( Tropp 2010 ) diverge. The immediately following presentation follows ( Ahlswede & Winter 2003 ).
The Golden–Thompson inequality implies that, for self-adjoint matrices $\mathbf{A}$ and $\mathbf{B}$,
$\operatorname{tr} e^{\mathbf{A} + \mathbf{B}} \leq \operatorname{tr}\bigl(e^{\mathbf{A}} e^{\mathbf{B}}\bigr).$
Suppose $\mathbf{Y} = \sum_k \mathbf{X}_k$. We can find an upper bound for $\operatorname{tr}\mathbf{M}_{\mathbf{Y}}(\theta)$ by iterating this result. Noting that $\operatorname{tr}(\mathbf{AB}) \leq \operatorname{tr}(\mathbf{A})\,\lambda_{\max}(\mathbf{B})$, then
Iterating this, we get
So far we have found a bound with an infimum over θ {\displaystyle \theta } . In turn, this can be bounded. At any rate, one can see how the Ahlswede–Winter bound arises as the sum of largest eigenvalues.
The major contribution of ( Tropp 2010 ) is the application of Lieb's theorem where ( Ahlswede & Winter 2003 ) had applied the Golden–Thompson inequality . Tropp's corollary is the following: if $\mathbf{H}$ is a fixed self-adjoint matrix and $\mathbf{X}$ is a random self-adjoint matrix, then
$\operatorname{E}\bigl[\operatorname{tr}\exp(\mathbf{H} + \mathbf{X})\bigr] \leq \operatorname{tr}\exp\bigl(\mathbf{H} + \log\operatorname{E}\,e^{\mathbf{X}}\bigr).$
Proof: Let $\mathbf{Y} = e^{\mathbf{X}}$. Then Lieb's theorem tells us that
$f(\mathbf{Y}) = \operatorname{tr}\exp(\mathbf{H} + \log\mathbf{Y})$
is concave. The final step is to use Jensen's inequality to move the expectation inside the function:
$\operatorname{E}\bigl[\operatorname{tr}\exp(\mathbf{H} + \log\mathbf{Y})\bigr] \leq \operatorname{tr}\exp\bigl(\mathbf{H} + \log\operatorname{E}[\mathbf{Y}]\bigr).$
This gives us the major result of the paper: the subadditivity of the log of the matrix generating function.
Let $\{\mathbf{X}_k\}$ be a finite sequence of independent, random self-adjoint matrices. Then for all $\theta \in \mathbb{R}$,
$\operatorname{E}\operatorname{tr}\exp\Bigl(\sum_k \theta\mathbf{X}_k\Bigr) \leq \operatorname{tr}\exp\Bigl(\sum_k \log\mathbf{M}_{\mathbf{X}_k}(\theta)\Bigr).$
Proof: It is sufficient to let $\theta = 1$. Expanding the definitions, we need to show that
$\operatorname{E}\operatorname{tr}\exp\Bigl(\sum_k \mathbf{X}_k\Bigr) \leq \operatorname{tr}\exp\Bigl(\sum_k \log\operatorname{E}\,e^{\mathbf{X}_k}\Bigr).$
To complete the proof, we use the law of total expectation . Let $\operatorname{E}_k$ be the expectation conditioned on $\mathbf{X}_1, \ldots, \mathbf{X}_k$. Since we assume all the $\mathbf{X}_i$ are independent,
Define $\mathbf{\Xi}_k = \log\operatorname{E}_{k-1} e^{\mathbf{X}_k} = \log\mathbf{M}_{\mathbf{X}_k}(\theta)$.
Finally, we have
where at every step m we use Tropp's corollary with
The following is immediate from the previous result:
All of the theorems given above are derived from this bound; the theorems consist in various ways to bound the infimum. These steps are significantly simpler than the proofs given. | https://en.wikipedia.org/wiki/Matrix_Chernoff_bound |
The Matrix Market exchange formats are a set of human readable , ASCII -based file formats designed to facilitate the exchange of matrix data. The file formats were designed and adopted for the Matrix Market, a NIST repository for test data for use in comparative studies of algorithms for numerical linear algebra . [ 1 ]
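As a quick illustration of the coordinate (sparse) variant of the format, the hedged example below sketches what a small .mtx file looks like and round-trips it through SciPy; the matrix, its values, and the file name are arbitrary.

```python
# A coordinate-format Matrix Market file begins with a header and a size line:
#   %%MatrixMarket matrix coordinate real general
#   % optional comment lines
#   3 3 4          <- rows, columns, number of stored entries
#   1 1 1.0        <- 1-based row index, column index, value
#   2 2 2.5
#   3 1 -3.0
#   3 3 4.0
import scipy.sparse as sp
from scipy.io import mmread, mmwrite

A = sp.coo_matrix(([1.0, 2.5, -3.0, 4.0], ([0, 1, 2, 2], [0, 1, 0, 2])), shape=(3, 3))
mmwrite("example.mtx", A)            # writes a coordinate-format file like the sketch above
B = mmread("example.mtx")            # returns the matrix as a sparse COO matrix
print((A.toarray() == B.toarray()).all())   # True: the round trip preserves the matrix
```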
| https://en.wikipedia.org/wiki/Matrix_Market_exchange_formats |
In mathematics , matrix addition is the operation of adding two matrices by adding the corresponding entries together.
For a vector $\vec{v}$, adding two matrices would have the geometric effect of applying each matrix transformation separately onto $\vec{v}$, then adding the transformed vectors.
However, there are other operations that could also be considered addition for matrices, such as the direct sum and the Kronecker sum .
Two matrices must have an equal number of rows and columns to be added. [ 1 ] In that case, the sum of two matrices A and B will be a matrix which has the same number of rows and columns as A and B . The sum of A and B , denoted A + B , is computed by adding corresponding elements of A and B : [ 2 ] [ 3 ]
$(\mathbf{A} + \mathbf{B})_{ij} = \mathbf{A}_{ij} + \mathbf{B}_{ij}, \qquad 1 \leq i \leq m,\ 1 \leq j \leq n.$
Or more concisely (assuming that A + B = C ): [ 4 ] [ 5 ]
$c_{ij} = a_{ij} + b_{ij}.$
For example:
$\begin{pmatrix} 1 & 3 \\ 1 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 7 & 5 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 8 & 5 \end{pmatrix}.$
Similarly, it is also possible to subtract one matrix from another, as long as they have the same dimensions. The difference of A and B , denoted A − B , is computed by subtracting elements of B from corresponding elements of A , and has the same dimensions as A and B . For example:
$\begin{pmatrix} 3 & 1 \\ 2 & 5 \end{pmatrix} - \begin{pmatrix} 1 & 4 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 2 & -3 \\ 2 & 3 \end{pmatrix}.$
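In numerical code, entrywise addition and subtraction map directly onto array operations. The short NumPy sketch below reproduces the examples above; the matrices are of course arbitrary.

```python
import numpy as np

A = np.array([[1, 3],
              [1, 0]])
B = np.array([[0, 0],
              [7, 5]])

print(A + B)   # [[1 3] [8 5]]
print(A - B)   # [[ 1  3] [-6 -5]]
# The shapes must match: adding a 2x2 matrix to a 2x3 matrix raises a ValueError,
# mirroring the requirement that both matrices have the same dimensions.
```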
Another operation, which is used less often, is the direct sum (denoted by ⊕). The Kronecker sum is also denoted ⊕; the context should make the usage clear. The direct sum of any pair of matrices A of size m × n and B of size p × q is a matrix of size ( m + p ) × ( n + q ) defined as: [ 6 ] [ 2 ]
$\mathbf{A} \oplus \mathbf{B} = \begin{pmatrix} \mathbf{A} & \mathbf{0} \\ \mathbf{0} & \mathbf{B} \end{pmatrix}.$
For instance,
$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \oplus \begin{pmatrix} 5 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 0 \\ 3 & 4 & 0 \\ 0 & 0 & 5 \end{pmatrix}.$
The direct sum of matrices is a special type of block matrix . In particular, the direct sum of square matrices is a block diagonal matrix .
The adjacency matrix of the union of disjoint graphs (or multigraphs ) is the direct sum of their adjacency matrices. Any element in the direct sum of two vector spaces of matrices can be represented as a direct sum of two matrices.
In general, the direct sum of n matrices is: [ 2 ]
$\bigoplus_{i=1}^{n} \mathbf{A}_i = \operatorname{diag}(\mathbf{A}_1, \ldots, \mathbf{A}_n) = \begin{pmatrix} \mathbf{A}_1 & 0 & \cdots & 0 \\ 0 & \mathbf{A}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{A}_n \end{pmatrix},$
where the zeros are actually blocks of zeros (i.e., zero matrices).
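Numerically, direct sums are just block-diagonal matrices, and SciPy provides a helper for building them; the sketch below reproduces the small example above with arbitrary blocks.

```python
import numpy as np
from scipy.linalg import block_diag

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5]])

print(block_diag(A, B))
# [[1 2 0]
#  [3 4 0]
#  [0 0 5]]
# block_diag accepts any number of blocks, matching the n-matrix direct sum above.
```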
The Kronecker sum is different from the direct sum, but is also denoted by ⊕. It is defined using the Kronecker product ⊗ and normal matrix addition. If A is n -by- n , B is m -by- m and $\mathbf{I}_k$ denotes the k -by- k identity matrix then the Kronecker sum is defined by:
$\mathbf{A} \oplus \mathbf{B} = \mathbf{A} \otimes \mathbf{I}_m + \mathbf{I}_n \otimes \mathbf{B}.$ | https://en.wikipedia.org/wiki/Matrix_addition |
In mathematics , particularly in linear algebra and applications, matrix analysis is the study of matrices and their algebraic properties. [ 1 ] Some particular topics out of many include; operations defined on matrices (such as matrix addition , matrix multiplication and operations derived from these), functions of matrices (such as matrix exponentiation and matrix logarithm , and even sines and cosines etc. of matrices), and the eigenvalues of matrices ( eigendecomposition of a matrix , eigenvalue perturbation theory). [ 2 ]
The set of all m × n matrices over a field F , denoted in this article M mn ( F ), forms a vector space . Examples of F include the set of rational numbers $\mathbb{Q}$, the real numbers $\mathbb{R}$, and the set of complex numbers $\mathbb{C}$. The spaces M mn ( F ) and M pq ( F ) are different spaces if m and p are unequal or if n and q are unequal; for instance M 32 ( F ) ≠ M 23 ( F ). Two m × n matrices A and B in M mn ( F ) can be added together to form another matrix in the space M mn ( F ):
$\mathbf{A} + \mathbf{B} \in M_{mn}(F),$
and multiplied by a scalar α in F to obtain another matrix in M mn ( F ):
$\alpha\mathbf{A} \in M_{mn}(F).$
Combining these two properties, a linear combination of matrices A and B in M mn ( F ) is another matrix in M mn ( F ):
$\alpha\mathbf{A} + \beta\mathbf{B} \in M_{mn}(F),$
where α and β are numbers in F .
Any matrix can be expressed as a linear combination of basis matrices, which play the role of the basis vectors for the matrix space. For example, for the set of 2 × 2 matrices over the field of real numbers, $M_{22}(\mathbb{R})$, one legitimate basis set of matrices is:
$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},$
because any 2 × 2 matrix can be expressed as:
$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = a\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} + d\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},$
where a , b , c , d are all real numbers. This idea applies to other fields and matrices of higher dimensions.
The determinant of a square matrix is an important property. The determinant indicates if a matrix is invertible (i.e. the inverse of a matrix exists when the determinant is nonzero). Determinants are used for finding eigenvalues of matrices (see below), and for solving a system of linear equations (see Cramer's rule ).
An n × n matrix A has eigenvectors x and eigenvalues λ defined by the relation:
$\mathbf{A}\mathbf{x} = \lambda\mathbf{x}.$
In words, multiplying an eigenvector x (here an n -dimensional column matrix ) by the matrix A is the same as multiplying the eigenvector by the eigenvalue. For an n × n matrix, there are n eigenvalues. The eigenvalues are the roots of the characteristic polynomial :
$p(\lambda) = \det(\mathbf{A} - \lambda\mathbf{I}) = 0,$
where I is the n × n identity matrix .
Roots of polynomials, in this context the eigenvalues, can all be different, or some may be equal (in which case eigenvalue has multiplicity , the number of times an eigenvalue occurs). After solving for the eigenvalues, the eigenvectors corresponding to the eigenvalues can be found by the defining equation.
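In practice the eigenvalues and eigenvectors of a numerical matrix are computed with standard linear-algebra routines rather than by factoring the characteristic polynomial. The NumPy sketch below uses an arbitrary 2 × 2 matrix and checks the defining relation for each eigenpair.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are the x's
for k, lam in enumerate(eigenvalues):
    x = eigenvectors[:, k]
    print(lam, np.allclose(A @ x, lam * x))    # A x = lambda x holds for each pair

# For this matrix the characteristic polynomial is lambda^2 - 4*lambda + 3,
# whose roots 1 and 3 are exactly the eigenvalues printed above.
```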
Two n × n matrices A and B are similar if they are related by a similarity transformation:
$\mathbf{B} = \mathbf{P}^{-1}\mathbf{A}\mathbf{P}.$
The matrix P is called a similarity matrix, and is necessarily invertible .
LU decomposition splits a matrix into the matrix product of a lower triangular matrix and an upper triangular matrix.
Since matrices form vector spaces, one can form axioms (analogous to those of vectors) to define a "size" of a particular matrix. The norm of a matrix is a positive real number.
For all matrices A and B in M mn ( F ), and all numbers α in F , a matrix norm, delimited by double vertical bars || ... ||, fulfills: [ note 1 ]
|| A || ≥ 0, with || A || = 0 if and only if A = 0 (positive definiteness);
||α A || = |α| || A || (absolute homogeneity);
|| A + B || ≤ || A || + || B || (the triangle inequality).
The Frobenius norm is analogous to the dot product of Euclidean vectors: multiply the matrix elements entry-wise, add up the results, then take the positive square root :
$\|\mathbf{A}\|_{\mathrm{F}} = \sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n} |a_{ij}|^2}.$
It is defined for matrices of any dimension (i.e. no restriction to square matrices).
Matrix elements are not restricted to constant numbers; they can be mathematical variables .
A function of a matrix takes in a matrix and returns something else (a number, vector, matrix, etc.).
A matrix-valued function takes in something (a number, vector, matrix, etc.) and returns a matrix. | https://en.wikipedia.org/wiki/Matrix_analysis |
In mathematics , two square matrices A and B over a field are called congruent if there exists an invertible matrix P over the same field such that
$\mathbf{P}^{\mathrm{T}}\mathbf{A}\mathbf{P} = \mathbf{B},$
where "T" denotes the matrix transpose . Matrix congruence is an equivalence relation .
Matrix congruence arises when considering the effect of change of basis on the Gram matrix attached to a bilinear form or quadratic form on a finite-dimensional vector space : two matrices are congruent if and only if they represent the same bilinear form with respect to different bases .
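The invariance described by Sylvester's law of inertia (stated below) is easy to demonstrate numerically: a congruence transformation changes the eigenvalues of a symmetric matrix but not how many of them are positive, negative, or zero. The sketch below uses arbitrary matrices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([2.0, -1.0, 0.0])           # inertia: one positive, one negative, one zero eigenvalue
P = rng.standard_normal((3, 3))
while abs(np.linalg.det(P)) < 1e-6:     # ensure P is invertible
    P = rng.standard_normal((3, 3))

B = P.T @ A @ P                         # B is congruent to A

def inertia(M, tol=1e-9):
    w = np.linalg.eigvalsh(M)
    return (int((w > tol).sum()), int((w < -tol).sum()), int((np.abs(w) <= tol).sum()))

print(inertia(A), inertia(B))           # the (n+, n-, n0) triples coincide
```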
Note that Halmos defines congruence in terms of conjugate transpose (with respect to a complex inner product space ) rather than transpose, [ 1 ] but this definition has not been adopted by most other authors.
Sylvester's law of inertia states that two congruent symmetric matrices with real entries have the same numbers of positive, negative, and zero eigenvalues . That is, the number of eigenvalues of each sign is an invariant of the associated quadratic form. [ 2 ] | https://en.wikipedia.org/wiki/Matrix_congruence |
A matrix difference equation is a difference equation in which the value of a vector (or sometimes, a matrix) of variables at one point in time is related to its own value at one or more previous points in time, using matrices . [ 1 ] [ 2 ] The order of the equation is the maximum time gap between any two indicated values of the variable vector. For example,
$\mathbf{x}_t = \mathbf{A}\mathbf{x}_{t-1} + \mathbf{B}\mathbf{x}_{t-2}$
is an example of a second-order matrix difference equation, in which x is an n × 1 vector of variables and A and B are n × n matrices. This equation is homogeneous because there is no vector constant term added to the end of the equation. The same equation might also be written as
$\mathbf{x}_{t+2} = \mathbf{A}\mathbf{x}_{t+1} + \mathbf{B}\mathbf{x}_t,$
or as
$\mathbf{x}_n = \mathbf{A}\mathbf{x}_{n-1} + \mathbf{B}\mathbf{x}_{n-2}.$
The most commonly encountered matrix difference equations are first-order.
An example of a nonhomogeneous first-order matrix difference equation is
$\mathbf{x}_t = \mathbf{A}\mathbf{x}_{t-1} + \mathbf{b},$
with additive constant vector b . The steady state of this system is a value x * of the vector x which, if reached, would not be deviated from subsequently. x * is found by setting x t = x t −1 = x * in the difference equation and solving for x * to obtain
$\mathbf{x}^{*} = [\mathbf{I} - \mathbf{A}]^{-1}\mathbf{b},$
where I is the n × n identity matrix , and where it is assumed that [ I − A ] is invertible . Then the nonhomogeneous equation can be rewritten in homogeneous form in terms of deviations from the steady state:
$[\mathbf{x}_t - \mathbf{x}^{*}] = \mathbf{A}[\mathbf{x}_{t-1} - \mathbf{x}^{*}].$
The first-order matrix difference equation [ x t − x *] = A [ x t −1 − x *] is stable —that is, x t converges asymptotically to the steady state x * —if and only if all eigenvalues of the transition matrix A (whether real or complex) have an absolute value which is less than 1.
Assume that the equation has been put in the homogeneous form $\mathbf{y}_t = \mathbf{A}\mathbf{y}_{t-1}$. Then we can iterate and substitute repeatedly from the initial condition $\mathbf{y}_0$, which is the initial value of the vector y and which must be known in order to find the solution:
$\mathbf{y}_1 = \mathbf{A}\mathbf{y}_0, \qquad \mathbf{y}_2 = \mathbf{A}\mathbf{y}_1 = \mathbf{A}^2\mathbf{y}_0,$
and so forth, so that by mathematical induction the solution in terms of t is
$\mathbf{y}_t = \mathbf{A}^t\mathbf{y}_0.$
Further, if A is diagonalizable, we can rewrite A in terms of its eigenvalues and eigenvectors , giving the solution as
$\mathbf{y}_t = \mathbf{P}\mathbf{D}^t\mathbf{P}^{-1}\mathbf{y}_0,$
where P is an n × n matrix whose columns are the eigenvectors of A (assuming the eigenvalues are all distinct) and D is an n × n diagonal matrix whose diagonal elements are the eigenvalues of A . This solution motivates the above stability result: A t shrinks to the zero matrix over time if and only if the eigenvalues of A are all less than unity in absolute value.
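A short numerical sketch makes the stability criterion tangible: when every eigenvalue of A lies inside the unit circle, iterating the nonhomogeneous equation drives the state to the steady state computed above. The matrix, constant vector, and initial condition below are arbitrary.

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.3]])                 # spectral radius ~ 0.57 < 1, so the system is stable
b = np.array([1.0, 2.0])
x = np.array([10.0, -4.0])                 # initial condition x_0

x_star = np.linalg.solve(np.eye(2) - A, b) # steady state x* = (I - A)^{-1} b

for t in range(50):
    x = A @ x + b                          # iterate x_t = A x_{t-1} + b

print("spectral radius :", round(float(max(abs(np.linalg.eigvals(A)))), 3))
print("x after 50 steps:", np.round(x, 6))
print("steady state x* :", np.round(x_star, 6))
```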
Starting from the n -dimensional system y t = Ay t −1 , we can extract the dynamics of one of the state variables, say y 1 . The above solution equation for y t shows that the solution for y 1, t is in terms of the n eigenvalues of A . Therefore the equation describing the evolution of y 1 by itself must have a solution involving those same eigenvalues. This description intuitively motivates the equation of evolution of y 1 , which is
where the parameters a i are from the characteristic equation of the matrix A :
Thus each individual scalar variable of an n -dimensional first-order linear system evolves according to a univariate n th-degree difference equation, which has the same stability property (stable or unstable) as does the matrix difference equation.
Matrix difference equations of higher order—that is, with a time lag longer than one period—can be solved, and their stability analyzed, by converting them into first-order form using a block matrix (matrix of matrices). For example, suppose we have the second-order equation
$\mathbf{x}_t = \mathbf{A}\mathbf{x}_{t-1} + \mathbf{B}\mathbf{x}_{t-2},$
with the variable vector x being n × 1 and A and B being n × n . This can be stacked in the form
$\begin{pmatrix} \mathbf{x}_t \\ \mathbf{x}_{t-1} \end{pmatrix} = \begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{I} & \mathbf{0} \end{pmatrix} \begin{pmatrix} \mathbf{x}_{t-1} \\ \mathbf{x}_{t-2} \end{pmatrix},$
where I is the n × n identity matrix and 0 is the n × n zero matrix . Then denoting the 2 n × 1 stacked vector of current and once-lagged variables as z t and the 2 n × 2 n block matrix as L , we have as before the solution
$\mathbf{z}_t = \mathbf{L}^t\mathbf{z}_0.$
Also as before, this stacked equation, and thus the original second-order equation, are stable if and only if all eigenvalues of the matrix L are smaller than unity in absolute value.
In linear-quadratic-Gaussian control , there arises a nonlinear matrix equation for the reverse evolution of a current-and-future-cost matrix , denoted below as H . This equation is called a discrete dynamic Riccati equation , and it arises when a variable vector evolving according to a linear matrix difference equation is controlled by manipulating an exogenous vector in order to optimize a quadratic cost function . This Riccati equation assumes the following, or a similar, form:
$\mathbf{H}_{t-1} = \mathbf{K} + \mathbf{A}'\mathbf{H}_t\mathbf{A} - \mathbf{A}'\mathbf{H}_t\mathbf{C}\bigl[\mathbf{C}'\mathbf{H}_t\mathbf{C} + \mathbf{R}\bigr]^{-1}\mathbf{C}'\mathbf{H}_t\mathbf{A},$
where H , K , and A are n × n , C is n × k , R is k × k , n is the number of elements in the vector to be controlled, and k is the number of elements in the control vector. The parameter matrices A and C are from the linear equation , and the parameter matrices K and R are from the quadratic cost function. See here for details.
In general this equation cannot be solved analytically for H t in terms of t ; rather, the sequence of values for H t is found by iterating the Riccati equation. However, it has been shown [ 3 ] that this Riccati equation can be solved analytically if R = 0 and n = k + 1 , by reducing it to a scalar rational difference equation ; moreover, for any k and n if the transition matrix A is nonsingular then the Riccati equation can be solved analytically in terms of the eigenvalues of a matrix, although these may need to be found numerically. [ 4 ]
In most contexts the evolution of H backwards through time is stable, meaning that H converges to a particular fixed matrix H * which may be irrational even if all the other matrices are rational. See also Stochastic control § Discrete time .
A related Riccati equation [ 5 ] is
in which the matrices X , A , B , C , E are all n × n . This equation can be solved explicitly. Suppose $\mathbf{X}_t = \mathbf{N}_t\mathbf{D}_t^{-1}$, which certainly holds for t = 0 with N 0 = X 0 and with D 0 = I . Then using this in the difference equation yields
so by induction the form $\mathbf{X}_t = \mathbf{N}_t\mathbf{D}_t^{-1}$ holds for all t . Then the evolution of N and D can be written as
Thus by induction | https://en.wikipedia.org/wiki/Matrix_difference_equation |
In physics, particularly in quantum perturbation theory , the matrix element refers to an entry of a linear operator, such as a modified Hamiltonian , expressed in Dirac notation . More precisely, it refers to the matrix elements of a Hamiltonian operator, which serve the purpose of calculating transition probabilities between different quantum states.
The matrix element considers the effect of the newly modified Hamiltonian (i.e. the linear superposition of the unperturbed Hamiltonian plus interaction potential) on the quantum state .
Matrix elements are important in atomic , nuclear and particle physics .
In simple terms, we say that a Hamiltonian or some other operator/observable will cause a transition from an initial quantum state $|i\rangle$ to a final quantum state $|f\rangle$ if the following holds true:
$\langle f|\hat{H}|i\rangle = M^{i,f} \neq 0, \qquad |\langle f|\hat{H}|i\rangle|^2 = |M^{i,f}|^2,$
where the last line is the probability amplitude of the transition caused by some operator $\hat{H}$, with the matrix element $M^{i,f}$ encapsulating this information. In effect, the calculation involves finding the matrix elements of the H-operator, which give this information about transitions between two states. Examples of this can be seen in nuclear physics, such as in the beta decay transition , neutrinoless double beta decay and double beta decay .
Consider an unperturbed initial system which can be represented by the following Hamiltonian operator:
$\hat{H} = \frac{-\hbar^2}{2m}\nabla^2 + V,$
where V is the potential energy of the system and m the mass of a particle. Solving the Schrödinger equation for a wave-function $|\Psi\rangle$ for the set of separable solutions, we get the following eigenequation for the time-independent Schrödinger equation:
$\hat{H}_{\psi}^{(0)}|\psi_n^0\rangle = E_n^0|\psi_n^0\rangle.$
Here, the superscripts represent the perturbation level of correction, where 0 represents the unperturbed system and any integer n > 0 represents a level of correction to the system (e.g. $E_n^1$ represents the first-order correction to the eigenenergies due to the perturbation). The subscript on the Hamiltonian operator indicates the basis in which the Hamiltonian matrix is represented; in this case, it is the $\{|\psi_n^0\rangle : n\in\mathbb{N}\}$ basis set. This allows us to put the Hamiltonian in a diagonal matrix form, where the diagonal elements are the only non-zero matrix elements of this operator.
Due to the orthonormality of the eigenstates, where $\langle\psi_m^0|\psi_n^0\rangle = \delta_{nm}$, we can easily observe that the off-diagonal matrix elements are zero:
$\langle\psi_m^0|\hat{H}_{\psi}^{0}|\psi_n^0\rangle = E_m\delta_{nm}.$
Physically, the matrix elements here represent the transition probability amplitude for a particle in eigenstate n to transition to eigenstate m due to the interaction governed by the unperturbed Hamiltonian. Since in the unperturbed Hamiltonian system these eigenstates are uncoupled, $\hat{H}_{\psi}^{(0)}$ will not cause any such transitions to occur. Mathematically, since all the off-diagonal elements are 0, the matrix elements calculated by the above expectation value will always return 0.
Suppose we perturb the system by a linear addition of a new interaction or perturbation Hamiltonian $\hat{H}^{int}$, so our new Hamiltonian will take the following form:
$\hat{H} = \hat{H}_{\psi}^{(0)} + \lambda\hat{H}^{int},$
where $\lambda$ represents the perturbation strength and runs from 0 to 1. This new interaction Hamiltonian may have off-diagonal elements V′ which are non-zero, and thus making the same calculations as above for the matrix elements gives:
$\langle\psi_m^0|\hat{H}^{int}|\psi_n^0\rangle = V'.$
This result states that by adding a perturbation to the unperturbed Hamiltonian, there is a non-zero probability of the eigenstates transitioning between one another. An example of this is the electron wave-function in the hydrogen atom compared with the electrons in helium . If the electron wave-function eigenstates can be represented by the unperturbed Hamiltonian, where the potential energy V represents the proton-electron interaction, then the helium atom's Hamiltonian will look similar to that of hydrogen with an extra term that represents the new electron-electron interactions, since helium has one more electron than hydrogen. In fact, this electron-electron interaction can be succinctly represented by the new interaction term $\hat{H}^{int} = V'$. This additional perturbation of energy to the initial hydrogen system (the addition of a new electron) will cause the eigenstates to transition, since the electromagnetic interaction will cause coupling between the electrons.
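The following Python sketch illustrates the same idea on a toy two-level system: the unperturbed Hamiltonian is diagonal in its own eigenbasis, while a perturbation with off-diagonal coupling produces non-zero matrix elements between the eigenstates. All numerical values are invented for the example.

```python
import numpy as np

H0 = np.diag([1.0, 2.0])            # unperturbed Hamiltonian (diagonal in its eigenbasis)
V  = np.array([[0.0, 0.1],
               [0.1, 0.0]])         # interaction Hamiltonian with off-diagonal coupling V'

E0, psi0 = np.linalg.eigh(H0)       # unperturbed eigenenergies and eigenstates (columns)

# M[m, n] = <psi_m | V | psi_n>: non-zero off-diagonal entries mean the
# perturbation couples the eigenstates and can drive transitions that H0 cannot.
M = psi0.conj().T @ V @ psi0
print("matrix elements of the perturbation:\n", M)
print("first-order energy corrections:", np.real(np.diag(M)))
```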
This makes calculating the matrix elements of the interaction Hamiltonian very important for finding the energy levels and wave-functions of particles in different atomic elements and nuclides. This many-body Hamiltonian problem becomes very complicated with the addition of more electrons, protons and neutrons as we move to other elements in the periodic table. | https://en.wikipedia.org/wiki/Matrix_element_(physics) |
Matrix factorization is a class of collaborative filtering algorithms used in recommender systems . Matrix factorization algorithms work by decomposing the user-item interaction matrix into the product of two lower dimensionality rectangular matrices. [ 1 ] This family of methods became widely known during the Netflix prize challenge due to its effectiveness as reported by Simon Funk in his 2006 blog post, [ 2 ] where he shared his findings with the research community. The prediction results can be improved by assigning different regularization weights to the latent factors based on items' popularity and users' activeness. [ 3 ]
The idea behind matrix factorization is to represent users and items in a lower dimensional latent space . Since the initial work by Funk in 2006 a multitude of matrix factorization approaches have been proposed for recommender systems. Some of the most used and simpler ones are listed in the following sections.
The original algorithm proposed by Simon Funk in his blog post [ 2 ] factorized the user-item rating matrix as the product of two lower dimensional matrices: the first one has a row for each user, while the second has a column for each item. The row or column associated to a specific user or item is referred to as latent factors . [ 4 ] Note that, in Funk MF no singular value decomposition is applied; it is an SVD-like machine learning model. [ 2 ] The predicted ratings can be computed as $\tilde{R} = H W$, where $\tilde{R} \in \mathbb{R}^{\text{users} \times \text{items}}$ is the user-item rating matrix, $H \in \mathbb{R}^{\text{users} \times \text{latent factors}}$ contains the user's latent factors and $W \in \mathbb{R}^{\text{latent factors} \times \text{items}}$ the item's latent factors.
Specifically, the predicted rating user u will give to item i is computed as:
$\tilde{r}_{ui} = \sum_{f=0}^{\text{n factors}} H_{u,f}\, W_{f,i}.$
It is possible to tune the expressive power of the model by changing the number of latent factors. It has been demonstrated [ 5 ] that a matrix factorization with one latent factor is equivalent to a most popular or top popular recommender (e.g. recommends the items with the most interactions without any personalization). Increasing the number of latent factors will improve personalization, therefore recommendation quality, until the number of factors becomes too high, at which point the model starts to overfit and the recommendation quality will decrease. A common strategy to avoid overfitting is to add regularization terms to the objective function. [ 6 ] [ 7 ] Funk MF was developed as a rating prediction problem, therefore it uses explicit numerical ratings as user-item interactions.
All things considered, Funk MF minimizes the following objective function:
$\underset{H, W}{\operatorname{arg\,min}}\; \|R - \tilde{R}\|_{\mathrm{F}} + \alpha\|H\| + \beta\|W\|,$
where $\|\cdot\|_{\mathrm{F}}$ is defined to be the Frobenius norm, whereas the other norms might be either the Frobenius norm or another norm depending on the specific recommending problem. [ 8 ]
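A minimal, self-contained sketch of this kind of factorization, trained with stochastic gradient descent on a handful of explicit ratings, is shown below. The tiny data set, hyperparameters, and initialization are arbitrary illustrative choices, not Funk's original implementation.

```python
import numpy as np

ratings = [  # (user, item, rating) triples of observed explicit interactions
    (0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0), (2, 2, 5.0),
]
n_users, n_items, n_factors = 3, 3, 2
lr, reg, epochs = 0.05, 0.02, 200

rng = np.random.default_rng(0)
H = 0.1 * rng.standard_normal((n_users, n_factors))   # user latent factors
W = 0.1 * rng.standard_normal((n_factors, n_items))   # item latent factors

for _ in range(epochs):
    for u, i, r in ratings:
        err = r - H[u] @ W[:, i]        # prediction error for this observed pair
        h_u = H[u].copy()               # keep the old user vector for W's update
        H[u]    += lr * (err * W[:, i] - reg * H[u])
        W[:, i] += lr * (err * h_u     - reg * W[:, i])

for u, i, r in ratings:
    print(f"user {u}, item {i}: observed {r}, predicted {H[u] @ W[:, i]:.2f}")
```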
While Funk MF is able to provide very good recommendation quality, its ability to use only explicit numerical ratings as user-items interactions constitutes a limitation. Modern day recommender systems should exploit all available interactions both explicit (e.g. numerical ratings) and implicit (e.g. likes, purchases, skipped, bookmarked). To this end SVD++ was designed to take into account implicit interactions as well. [ 9 ] [ 10 ] Compared to Funk MF, SVD++ takes also into account user and item bias.
The predicted rating user u will give to item i is computed as:
Where $\mu$ refers to the overall average rating over all items, and $b_i$ and $b_u$ refer to the observed deviations of item i and user u , respectively, from the average. [ 11 ] SVD++ has however some disadvantages, the main drawback being that this method is not model-based. This means that if a new user is added, the algorithm is incapable of modeling it unless the whole model is retrained. Even though the system might have gathered some interactions for that new user, its latent factors are not available and therefore no recommendations can be computed. This is an example of a cold-start problem, that is, the recommender cannot deal efficiently with new users or items, and specific strategies should be put in place to handle this disadvantage. [ 12 ]
A possible way to address this cold-start problem is to modify SVD++ so that it becomes a model-based algorithm, thus allowing new items and new users to be handled easily.
As previously mentioned, in SVD++ the latent factors of new users are not available, so it is necessary to represent them in a different way. The user's latent factors represent the preference of that user for the corresponding item's latent factors; therefore, the user's latent factors can be estimated from that user's past interactions. If the system is able to gather some interactions for the new user, it is then possible to estimate its latent factors.
Note that this does not entirely solve the cold-start problem, since the recommender still requires some reliable interactions for new users, but at least there is no need to recompute the whole model every time. It has been demonstrated that this formulation is almost equivalent to a SLIM model, [ 13 ] which is an item-item model based recommender.
With this formulation, the equivalent item-item recommender would be R ~ = R S = R W T W {\displaystyle {\tilde {R}}=RS=RW^{\rm {T}}W} . Therefore, the similarity matrix S = W T W {\displaystyle S=W^{\rm {T}}W} is symmetric.
Asymmetric SVD aims at combining the advantages of SVD++ while being a model-based algorithm, and is therefore able to consider new users with a few ratings without needing to retrain the whole model. As opposed to the model-based SVD, here the user latent-factor matrix H is replaced by Q, which learns the user's preferences as a function of their ratings. [ 14 ]
The predicted rating user u will give to item i is computed as:
r ~ u i = μ + b i + b u + ∑ f = 0 n factors ∑ j = 0 n items r u j Q j , f W f , i {\displaystyle {\tilde {r}}_{ui}=\mu +b_{i}+b_{u}+\sum _{f=0}^{\text{n factors}}\sum _{j=0}^{\text{n items}}r_{uj}Q_{j,f}W_{f,i}}
With this formulation, the equivalent item-item recommender would be R ~ = R S = R Q T W {\displaystyle {\tilde {R}}=RS=RQ^{\rm {T}}W} . Since the matrices Q and W are different, the similarity matrix S = Q T W {\displaystyle S=Q^{\rm {T}}W} is asymmetric, hence the name of the model.
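A toy numerical check of the two item-item formulations is sketched below; the matrices are random placeholders, meant only to show the symmetry structure and how a new user's ratings can be folded in without retraining, as discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_factors, n_items = 3, 6
W = rng.normal(size=(n_factors, n_items))   # item latent factors
Q = rng.normal(size=(n_factors, n_items))   # rating-driven user-preference factors (Asymmetric SVD)

S_svd = W.T @ W                             # model-based SVD++ similarity: symmetric
S_asym = Q.T @ W                            # Asymmetric SVD similarity: not symmetric in general
print(np.allclose(S_svd, S_svd.T), np.allclose(S_asym, S_asym.T))   # True False

r_new = np.array([5.0, 0, 0, 3.0, 0, 0])    # ratings gathered for a brand-new user
print(r_new @ S_svd)                        # fold-in predictions, no retraining needed
```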
A group-specific SVD can be an effective approach for the cold-start problem in many scenarios. [ 6 ] It clusters users and items based on dependency information and similarities in characteristics. Then, once a new user or item arrives, a group label can be assigned to it, and its latent factor approximated by the group effect of the corresponding group. Therefore, although ratings associated with the new user or item are not necessarily available, the group effects provide immediate and effective predictions.
The predicted rating user u will give to item i is computed as:
Here v u {\displaystyle v_{u}} and j i {\displaystyle j_{i}} represent the group labels of user u and item i , respectively, which are identical across members of the same group, and S and T are matrices of group effects. For example, for a new user u n e w {\displaystyle u_{new}} whose latent factor H u n e w {\displaystyle H_{u_{new}}} is not available, we can at least identify their group label v u n e w {\displaystyle v_{u_{new}}} , and predict their ratings as:
This provides a good approximation to the unobserved ratings.
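As a heavily hedged illustration of the group-effect idea (the clustering step and the exact group-effect model of reference [ 6 ] are not reproduced here), a new user's missing latent factor can be stood in for by an aggregate over the group they are assigned to:

```python
import numpy as np

def group_mean_factor(H, group_labels, new_user_group):
    """Approximate a new user's latent factor by the mean factor of their group.
    This is only an illustration of the idea, not the specific method of [6]."""
    members = np.where(group_labels == new_user_group)[0]
    return H[members].mean(axis=0)          # group effect used in place of H[u_new]

H = np.array([[0.9, 0.1], [1.1, 0.2], [0.1, 1.0], [0.0, 0.9]])   # known users' factors
group_labels = np.array([0, 0, 1, 1])                            # their group labels
print(group_mean_factor(H, group_labels, 1))                     # [0.05 0.95]
```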
In recent years many other matrix factorization models have been developed to exploit the ever-increasing amount and variety of available interaction data and use cases. Hybrid matrix factorization algorithms are capable of merging explicit and implicit interactions [ 15 ] or both content and collaborative data. [ 16 ] [ 17 ] [ 18 ]
In recent years a number of neural and deep-learning techniques have been proposed, some of which generalize traditional matrix factorization algorithms via a non-linear neural architecture. [ 19 ] While deep learning has been applied to many different scenarios (context-aware, sequence-aware, social tagging, etc.), its real effectiveness when used in a simple collaborative filtering scenario has been put into question. Systematic analysis of publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), has shown that on average less than 40% of articles are reproducible, with as little as 14% in some conferences. Overall, the studies identified 26 articles; only 12 of them could be reproduced, and 11 of those could be outperformed by much older and simpler, properly tuned baselines. The articles also highlight a number of potential problems in today's research scholarship and call for improved scientific practices in that area. [ 20 ] [ 21 ] Similar issues have been spotted also in sequence-aware recommender systems. [ 22 ] | https://en.wikipedia.org/wiki/Matrix_factorization_(recommender_systems)
In mathematics, a matrix factorization of a polynomial is a technique for factoring irreducible polynomials with matrices . David Eisenbud proved that every multivariate real-valued polynomial p without linear terms can be written as AB = pI , where A and B are square matrices and I is the identity matrix . [ 1 ] Given the polynomial p , the matrices A and B can be found by elementary methods. [ 2 ]
The polynomial x 2 + y 2 is irreducible over R [ x , y ], but it admits a matrix factorization: there are 2 × 2 matrices A and B with entries in R [ x , y ] such that A B = ( x 2 + y 2 ) I .
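One explicit pair of matrices realizing this factorization (a standard example, easily verified by direct multiplication; the particular choice of signs and arrangement is not unique) is:

```latex
\begin{pmatrix} x & y \\ -y & x \end{pmatrix}
\begin{pmatrix} x & -y \\ y & x \end{pmatrix}
=
\begin{pmatrix} x^{2}+y^{2} & 0 \\ 0 & x^{2}+y^{2} \end{pmatrix}
= (x^{2}+y^{2})\, I .
```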
| https://en.wikipedia.org/wiki/Matrix_factorization_of_a_polynomial
Matrix isolation is an experimental technique used in chemistry and physics . It generally involves a material being trapped within an unreactive matrix . A host matrix is a continuous solid phase in which guest particles (atoms, molecules, ions, etc.) are embedded. The guest is said to be isolated within the host matrix. Initially the term matrix-isolation was used to describe the placing of a chemical species in any unreactive material, often polymers or resins , but more recently has referred specifically to gases in low-temperature solids. A typical matrix isolation experiment involves a guest sample being diluted in the gas phase with the host material, usually a noble gas or nitrogen . This mixture is then deposited on a window that is cooled to below the melting point of the host gas. The sample may then be studied using various spectroscopic procedures.
The transparent window, on to which the sample is deposited, is usually cooled using compressed helium or a similar refrigerant. Experiments must be performed under a high vacuum to prevent contaminants from unwanted gases freezing to the cold window. Lower temperatures are preferred due to the improved rigidity and "glassiness" of the matrix material. Noble gases such as argon are used not just because of their unreactivity but also because of their broad optical transparency in the solid state. Mono-atomic gases have a relatively simple face-centered cubic (fcc) crystal structure , which can make interpretation of the site occupancy and crystal-field splitting of the guest easier. In some cases a reactive material, for example methane , hydrogen or ammonia , may be used as the host material so that the reaction of the host with the guest species may be studied.
Using the matrix isolation technique, short-lived, highly reactive species such as radical ions and reaction intermediates may be observed and identified by spectroscopic means. For example, the solid noble gas krypton can be used to form an inert matrix within which a reactive F 3 − ion can sit in chemical isolation. [ 1 ] The reactive species can be generated outside the apparatus (before deposition) and then condensed, generated inside the matrix (after deposition) by irradiating or heating a precursor, or formed by bringing together two reactants on the growing matrix surface. For the deposition of two species it can be crucial to control the contact time and temperature. In twin jet deposition the two species have a much shorter contact time (and lower temperature) than in merged jet deposition. With a concentric jet the contact time is adjustable. [ 2 ]
Within the host matrix, the rotation and translation of the guest particle is usually inhibited. Therefore, the matrix isolation technique may be used to simulate a spectrum of a species in the gas phase without rotational and translational interference. The low temperatures also help to produce simpler spectra, since only the lower electronic and vibrational quantum states are populated.
Infrared (IR) spectroscopy , which is used to investigate molecular vibrations , benefits especially from the matrix isolation technique. For example, in the gas-phase IR spectrum of fluoroethane some spectral regions are very difficult to interpret, as vibrational quantum states heavily overlap with multiple rotational-vibrational quantum states. When fluoroethane is isolated in argon or neon matrices at low temperatures, the rotation of the fluoroethane molecule is inhibited. Because rotational-vibrational quantum states are quenched in the matrix isolation IR spectrum of fluoroethane, all vibrational quantum states can be identified. [ 3 ] This is especially useful for the validation of simulated infrared spectra obtained from computational chemistry . [ 4 ]
Matrix isolation has its origins in the first half of the 20th century with the experiments by photo-chemists and physicists freezing samples in liquefied gases. The earliest isolation experiments involved the freezing of species in transparent, low temperature organic glasses , such as EPA (ether/isopentane/ethanol 5:5:2). The modern matrix isolation technique was developed extensively during the 1950s, in particular by George C. Pimentel . [ 5 ] He initially used higher-boiling inert gases like xenon and nitrogen as the host material, and is often said to be the "father of matrix isolation".
Laser vaporization in matrix isolation spectroscopy was first brought about in 1969 by Schaeffer and Pearson, who used an yttrium aluminum garnet (YAG) laser to vaporize carbon, which reacted with hydrogen to produce acetylene. They also showed that laser-vaporized boron would react with HCl to create BCl 3 . In the 1970s, Koerner von Gustorf's lab used the technique to produce free metal atoms, which were then deposited with organic substrates for use in organometallic chemistry . Spectroscopic studies were done on reactive intermediates around the early 1980s by Bell Labs. They used laser-induced fluorescence to characterize molecules such as SnBi and SiC 2 . Smalley's group employed this method together with time-of-flight mass spectrometry to analyze Al clusters. With the work of chemists like these, laser vaporization in matrix isolation spectroscopy rose in popularity due to its ability to generate transients involving metals, alloys and semiconductor molecules and clusters. [ 6 ] | https://en.wikipedia.org/wiki/Matrix_isolation
Matrix mechanics is a formulation of quantum mechanics created by Werner Heisenberg , Max Born , and Pascual Jordan in 1925. It was the first conceptually autonomous and logically consistent formulation of quantum mechanics. Its account of quantum jumps supplanted the Bohr model 's electron orbits . It did so by interpreting the physical properties of particles as matrices that evolve in time. It is equivalent to the Schrödinger wave formulation of quantum mechanics, as manifest in Dirac 's bra–ket notation .
In some contrast to the wave formulation, it produces spectra of (mostly energy) operators by purely algebraic, ladder operator methods. [ 1 ] Relying on these methods, Wolfgang Pauli derived the hydrogen atom spectrum in 1926, [ 2 ] before the development of wave mechanics.
In 1925, Werner Heisenberg , Max Born , and Pascual Jordan formulated the matrix mechanics representation of quantum mechanics.
In 1925 Werner Heisenberg was working in Göttingen on the problem of calculating the spectral lines of hydrogen . By May 1925 he began trying to describe atomic systems by observables only. On June 7, after weeks of failing to alleviate his hay fever with aspirin and cocaine, [ 3 ] Heisenberg left for the pollen-free North Sea island of Helgoland . While there, in between climbing and memorizing poems from Goethe 's West-östlicher Diwan , he continued to ponder the spectral issue and eventually realised that adopting non-commuting observables might solve the problem. He later wrote:
It was about three o' clock at night when the final result of the calculation lay before me. At first I was deeply shaken. I was so excited that I could not think of sleep. So I left the house and awaited the sunrise on the top of a rock. [ 4 ] : 275
After Heisenberg returned to Göttingen, he showed Wolfgang Pauli his calculations, commenting at one point:
Everything is still vague and unclear to me, but it seems as if the electrons will no more move on orbits. [ 5 ]
On July 9 Heisenberg gave the same paper of his calculations to Max Born, saying that "he had written a crazy paper and did not dare to send it in for publication, and that Born should read it and advise him" prior to publication. Heisenberg then departed for a while, leaving Born to analyse the paper. [ 6 ]
In the paper, Heisenberg formulated quantum theory without sharp electron orbits. Hendrik Kramers had earlier calculated the relative intensities of spectral lines in the Sommerfeld model by interpreting the Fourier coefficients of the orbits as intensities. But his answer, like all other calculations in the old quantum theory , was only correct for large orbits .
Heisenberg, after a collaboration with Kramers, [ 7 ] began to understand that the transition probabilities were not quite classical quantities, because the only frequencies that appear in the Fourier series should be the ones that are observed in quantum jumps, not the fictional ones that come from Fourier-analyzing sharp classical orbits. He replaced the classical Fourier series with a matrix of coefficients, a fuzzed-out quantum analog of the Fourier series. Classically, the Fourier coefficients give the intensity of the emitted radiation , so in quantum mechanics the magnitude of the matrix elements of the position operator were the intensity of radiation in the bright-line spectrum. The quantities in Heisenberg's formulation were the classical position and momentum, but now they were no longer sharply defined. Each quantity was represented by a collection of Fourier coefficients with two indices, corresponding to the initial and final states. [ 8 ]
When Born read the paper, he recognized the formulation as one which could be transcribed and extended to the systematic language of matrices , [ 9 ] which he had learned from his study under Jakob Rosanes [ 10 ] at Breslau University . Born, with the help of his assistant and former student Pascual Jordan, began immediately to make the transcription and extension, and they submitted their results for publication; the paper was received for publication just 60 days after Heisenberg's paper. [ 11 ]
A follow-on paper was submitted for publication before the end of the year by all three authors. [ 12 ] (A brief review of Born's role in the development of the matrix mechanics formulation of quantum mechanics along with a discussion of the key formula involving the non-commutativity of the probability amplitudes can be found in an article by Jeremy Bernstein . [ 13 ] A detailed historical and technical account can be found in Mehra and Rechenberg's book The Historical Development of Quantum Theory. Volume 3. The Formulation of Matrix Mechanics and Its Modifications 1925–1926. [ 14 ] )
The three fundamental papers:
Up until this time, matrices were seldom used by physicists; they were considered to belong to the realm of pure mathematics. Gustav Mie had used them in a paper on electrodynamics in 1912 and Born had used them in his work on the lattice theory of crystals in 1921. While matrices were used in these cases, the algebra of matrices with their multiplication did not enter the picture as it did in the matrix formulation of quantum mechanics. [ 15 ]
Born, however, had learned matrix algebra from Rosanes, as already noted, but Born had also learned Hilbert's theory of integral equations and quadratic forms for an infinite number of variables as was apparent from a citation by Born of Hilbert's work Grundzüge einer allgemeinen Theorie der Linearen Integralgleichungen published in 1912. [ 16 ] [ 17 ]
Jordan, too, was well equipped for the task. For a number of years, he had been an assistant to Richard Courant at Göttingen in the preparation of Courant and David Hilbert 's book Methoden der mathematischen Physik I , which was published in 1924. [ 18 ] This book, fortuitously, contained a great many of the mathematical tools necessary for the continued development of quantum mechanics.
In 1926, John von Neumann became assistant to David Hilbert, and he would coin the term Hilbert space to describe the algebra and analysis which were used in the development of quantum mechanics. [ 19 ] [ 20 ]
A linchpin contribution to this formulation was achieved in Dirac's reinterpretation/synthesis paper of 1925, [ 21 ] which invented the language and framework usually employed today, in full display of the noncommutative structure of the entire construction.
Before matrix mechanics, the old quantum theory described the motion of a particle by a classical orbit, with well defined position and momentum X ( t ) , P ( t ) , with the restriction that the time integral over one period T of the momentum times the velocity must be a positive integer multiple of the Planck constant ∫ 0 T P d X d t d t = ∫ 0 T P d X = n h . {\displaystyle \int _{0}^{T}P\;{\frac {dX}{dt}}\;dt=\int _{0}^{T}P\;dX=nh.} While this restriction correctly selects orbits with more or less the
right energy values E n , the old quantum mechanical formalism did not describe time dependent processes, such as the emission or absorption of radiation.
When a classical particle is weakly coupled to a radiation field, so that the radiative damping can be neglected, it will emit radiation in a pattern that repeats itself every orbital period . The frequencies that make up the outgoing wave are then integer multiples of the orbital frequency, and this is a reflection of the fact that X ( t ) is periodic, so that its Fourier representation has frequencies 2 πn / T only. X ( t ) = ∑ n = − ∞ ∞ e 2 π i n t / T X n . {\displaystyle X(t)=\sum _{n=-\infty }^{\infty }e^{2\pi int/T}X_{n}.} The coefficients X n are complex numbers . The ones with negative frequencies must be the complex conjugates of the ones with positive frequencies, so that X ( t ) will always be real, X n = X − n ∗ . {\displaystyle X_{n}=X_{-n}^{*}.}
A quantum mechanical particle, on the other hand, cannot emit radiation continuously; it can only emit photons. Assuming that the quantum particle started in orbit number n , emitted a photon, then ended up in orbit number m , the energy of the photon is E n − E m , which means that its frequency is ( E n − E m ) / h .
For large n and m , but with n − m relatively small, these are the classical frequencies by Bohr 's correspondence principle E n − E m ≈ h ( n − m ) T . {\displaystyle E_{n}-E_{m}\approx {\frac {h(n-m)}{T}}.} In the formula above, T is the classical period of either orbit n or orbit m , since the difference between them is higher order in h . But for small n and m , or if n − m is large, the frequencies are not integer multiples of any single frequency.
Since the frequencies that the particle emits are the same as the frequencies in the Fourier description of its motion, this suggests that something in the time-dependent description of the particle is oscillating with frequency E n − E m / h . Heisenberg called this quantity X nm ,
and demanded that it should reduce to the classical Fourier coefficients in the classical limit. For large values of n and m but with n − m relatively small, X nm is the ( n − m ) th Fourier coefficient of the classical motion at orbit n . Since X nm has opposite frequency to X mn , the condition that X is real becomes X n m = X m n ∗ . {\displaystyle X_{nm}=X_{mn}^{*}.}
By definition, X nm only has the frequency E n − E m / h , so its time evolution is simple: X n m ( t ) = e 2 π i ( E n − E m ) t / h X n m ( 0 ) = e i ( E n − E m ) t / ℏ X n m ( 0 ) . {\displaystyle X_{nm}(t)=e^{2\pi i(E_{n}-E_{m})t/h}X_{nm}(0)=e^{i(E_{n}-E_{m})t/\hbar }X_{nm}(0).} This is the original form of Heisenberg's equation of motion.
Given two arrays X nm and P nm describing two physical quantities, Heisenberg could form a new array of the same type by combining the terms X nk P km , which also oscillate with the right frequency. Since the Fourier coefficients of the product of two quantities is the convolution of the Fourier coefficients of each one separately, the correspondence with Fourier series allowed Heisenberg to deduce the rule by which the arrays should be multiplied, ( X P ) m n = ∑ k = 0 ∞ X m k P k n . {\displaystyle (XP)_{mn}=\sum _{k=0}^{\infty }X_{mk}P_{kn}.}
Born pointed out that this is the law of matrix multiplication , so that the position, the momentum, the energy, all the observable quantities in the theory, are interpreted as matrices. Under this multiplication rule, the product depends on the order: XP is different from PX .
The X matrix is a complete description of the motion of a quantum mechanical particle. Because the frequencies in the quantum motion are not multiples of a common frequency, the matrix elements cannot be interpreted as the Fourier coefficients of a sharp classical trajectory . Nevertheless, as matrices, X ( t ) and P ( t ) satisfy the classical equations of motion; also see Ehrenfest's theorem, below.
When it was introduced by Werner Heisenberg, Max Born and Pascual Jordan in 1925, matrix mechanics was not immediately accepted and was a source of controversy, at first. Schrödinger's later introduction of wave mechanics was greatly favored.
Part of the reason was that Heisenberg's formulation was in an odd mathematical language, for the time, while Schrödinger's formulation was based on familiar wave equations. But there was also a deeper sociological reason. Quantum mechanics had been developing by two paths, one led by Einstein, who emphasized the wave–particle duality he proposed for photons, and the other led by Bohr, that emphasized the discrete energy states and quantum jumps that Bohr discovered. De Broglie had reproduced the discrete energy states within Einstein's framework – the quantum condition is the standing wave condition, and this gave hope to those in the Einstein school that all the discrete aspects of quantum mechanics would be subsumed into a continuous wave mechanics.
Matrix mechanics, on the other hand, came from the Bohr school, which was concerned with discrete energy states and quantum jumps. Bohr's followers did not appreciate physical models that pictured electrons as waves, or as anything at all. They preferred to focus on the quantities that were directly connected to experiments.
In atomic physics, spectroscopy gave observational data on atomic transitions arising from the interactions of atoms with light quanta . The Bohr school required that only those quantities that were in principle measurable by spectroscopy should appear in the theory. These quantities include the energy levels and their intensities but they do not include the exact location of a particle in its Bohr orbit. It is very hard to imagine an experiment that could determine whether an electron in the ground state of a hydrogen atom is to the right or to the left of the nucleus. It was a deep conviction that such questions did not have an answer.
The matrix formulation was built on the premise that all physical observables are represented by matrices, whose elements are indexed by two different energy levels. [ 22 ] The set of eigenvalues of the matrix were eventually understood to be the set of all possible values that the observable can have. Since Heisenberg's matrices are Hermitian , the eigenvalues are real.
If an observable is measured and the result is a certain eigenvalue, the corresponding eigenvector is the state of the system immediately after the measurement. The act of measurement in matrix mechanics collapses the state of the system. If one measures two observables simultaneously, the state of the system collapses to a common eigenvector of the two observables. Since most matrices don't have any eigenvectors in common, most observables can never be measured precisely at the same time. This is the uncertainty principle .
If two matrices share their eigenvectors, they can be simultaneously diagonalized. In the basis where they are both diagonal, it is clear that their product does not depend on their order because multiplication of diagonal matrices is just multiplication of numbers. The uncertainty principle, by contrast, is an expression of the fact that two matrices A and B do not always commute, i.e., that AB − BA does not necessarily equal 0. The fundamental commutation relation of matrix mechanics, ∑ k ( X n k P k m − P n k X k m ) = i ℏ δ n m {\displaystyle \sum _{k}\left(X_{nk}P_{km}-P_{nk}X_{km}\right)=i\hbar \,\delta _{nm}} implies then that there are no states that simultaneously have a definite position and momentum .
This principle of uncertainty holds for many other pairs of observables as well. For example, the energy does not commute with the position either, so it is impossible to precisely determine the position and energy of an electron in an atom.
In 1928, Albert Einstein nominated Heisenberg, Born, and Jordan for the Nobel Prize in Physics . [ 23 ] The announcement of the Nobel Prize in Physics for 1932 was delayed until November 1933. [ 24 ] It was at that time that it was announced Heisenberg had won the Prize for 1932 "for the creation of quantum mechanics, the application of which has, inter alia , led to the discovery of the allotropic forms of hydrogen" [ 25 ] and Erwin Schrödinger and Paul Adrien Maurice Dirac shared the 1933 Prize "for the discovery of new productive forms of atomic theory". [ 25 ]
It might well be asked why Born was not awarded the Prize in 1932, along with Heisenberg, and Bernstein proffers speculations on this matter. One of them relates to Jordan joining the Nazi Party on May 1, 1933, and becoming a stormtrooper . [ 26 ] Jordan's Party affiliations and Jordan's links to Born may well have affected Born's chance at the Prize at that time. Bernstein further notes that when Born finally won the Prize in 1954, Jordan was still alive, while the Prize was awarded for the statistical interpretation of quantum mechanics, attributable to Born alone. [ 27 ]
Heisenberg's reactions to Born for Heisenberg receiving the Prize for 1932 and for Born receiving the Prize in 1954 are also instructive in evaluating whether Born should have shared the Prize with Heisenberg. On November 25, 1933, Born received a letter from Heisenberg in which he said he had been delayed in writing due to a "bad conscience" that he alone had received the Prize "for work done in Göttingen in collaboration – you, Jordan and I". Heisenberg went on to say that Born and Jordan's contribution to quantum mechanics cannot be changed by "a wrong decision from the outside". [ 28 ]
In 1954, Heisenberg wrote an article honoring Max Planck for his insight in 1900. In the article, Heisenberg credited Born and Jordan for the final mathematical formulation of matrix mechanics and Heisenberg went on to stress how great their contributions were to quantum mechanics, which were not "adequately acknowledged in the public eye". [ 29 ]
Once Heisenberg introduced the matrices for X and P , he could find their matrix elements in special cases by guesswork, guided by the correspondence principle. Since the matrix elements are the quantum mechanical analogs of Fourier coefficients of the classical orbits, the simplest case is the harmonic oscillator , where the classical position and momentum, X ( t ) and P ( t ) , are sinusoidal.
In units where the mass and frequency of the oscillator are equal to one (see nondimensionalization ), the energy of the oscillator is H = 1 2 ( P 2 + X 2 ) . {\displaystyle H={\tfrac {1}{2}}\left(P^{2}+X^{2}\right).}
The level sets of H are the clockwise orbits, and they are nested circles in phase space. The classical orbit with energy E is X ( t ) = 2 E cos ( t ) , P ( t ) = − 2 E sin ( t ) . {\displaystyle X(t)={\sqrt {2E}}\cos(t),\qquad P(t)=-{\sqrt {2E}}\sin(t)~.}
The old quantum condition dictates that the integral of P dX over an orbit, which is the area of the circle in phase space, must be an integer multiple of the Planck constant . The area of the circle of radius √ 2 E is 2 πE . So E = n h 2 π = n ℏ , {\displaystyle E={\frac {nh}{2\pi }}=n\hbar \,,} or, in natural units where ħ = 1 , the energy is an integer.
The Fourier components of X ( t ) and P ( t ) are simple, and more so if they are combined into the quantities A ( t ) = X ( t ) + i P ( t ) = 2 E e − i t , A † ( t ) = X ( t ) − i P ( t ) = 2 E e i t . {\displaystyle A(t)=X(t)+iP(t)={\sqrt {2E}}\,e^{-it},\quad A^{\dagger }(t)=X(t)-iP(t)={\sqrt {2E}}\,e^{it}.} Both A and A † have only a single frequency, and X and P can be recovered from their sum and difference.
Since A ( t ) has a classical Fourier series with only the lowest frequency, and the matrix element A mn is the ( m − n ) th Fourier coefficient of the classical orbit, the matrix for A is nonzero only on the line just above the diagonal, where it is equal to √ 2 E n . The matrix for A † is likewise only nonzero on the line below the diagonal, with the same elements. Thus, from A and A † , reconstruction yields 2 X ( 0 ) = ℏ [ 0 1 0 0 0 ⋯ 1 0 2 0 0 ⋯ 0 2 0 3 0 ⋯ 0 0 3 0 4 ⋯ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ ] , {\displaystyle {\sqrt {2}}X(0)={\sqrt {\hbar }}\;{\begin{bmatrix}0&{\sqrt {1}}&0&0&0&\cdots \\{\sqrt {1}}&0&{\sqrt {2}}&0&0&\cdots \\0&{\sqrt {2}}&0&{\sqrt {3}}&0&\cdots \\0&0&{\sqrt {3}}&0&{\sqrt {4}}&\cdots \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \\\end{bmatrix}},} and 2 P ( 0 ) = ℏ [ 0 − i 1 0 0 0 ⋯ i 1 0 − i 2 0 0 ⋯ 0 i 2 0 − i 3 0 ⋯ 0 0 i 3 0 − i 4 ⋯ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ ] , {\displaystyle {\sqrt {2}}P(0)={\sqrt {\hbar }}\;{\begin{bmatrix}0&-i{\sqrt {1}}&0&0&0&\cdots \\i{\sqrt {1}}&0&-i{\sqrt {2}}&0&0&\cdots \\0&i{\sqrt {2}}&0&-i{\sqrt {3}}&0&\cdots \\0&0&i{\sqrt {3}}&0&-i{\sqrt {4}}&\cdots \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \\\end{bmatrix}},} which, up to the choice of units, are the Heisenberg matrices for the harmonic oscillator. Both matrices are Hermitian , since they are constructed from the Fourier coefficients of real quantities.
Finding X ( t ) and P ( t ) is direct, since they are quantum Fourier coefficients so they evolve simply with time, X m n ( t ) = X m n ( 0 ) e i ( E m − E n ) t , P m n ( t ) = P m n ( 0 ) e i ( E m − E n ) t . {\displaystyle X_{mn}(t)=X_{mn}(0)e^{i(E_{m}-E_{n})t},\quad P_{mn}(t)=P_{mn}(0)e^{i(E_{m}-E_{n})t}~.}
The matrix product of X and P is not Hermitian, but has a real and an imaginary part. The real part is one half the symmetric expression XP + PX , while the imaginary part is proportional to the commutator [ X , P ] = ( X P − P X ) . {\displaystyle [X,P]=(XP-PX).} It is simple to verify explicitly that XP − PX , in the case of the harmonic oscillator, is iħ multiplied by the identity .
It is likewise simple to verify that the matrix H = 1 2 ( X 2 + P 2 ) {\displaystyle H={\tfrac {1}{2}}\left(X^{2}+P^{2}\right)} is a diagonal matrix , with eigenvalues E i .
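These statements can be checked numerically on a finite truncation of the matrices above; in the sketch below (ħ = 1, with an arbitrary cutoff N chosen for illustration) the last row and column are contaminated by the truncation, so the checks are restricted accordingly.

```python
import numpy as np

N = 6                                       # truncation size, an arbitrary choice
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)                # matrix of A: nonzero just above the diagonal
X = (a + a.T.conj()) / np.sqrt(2)           # sqrt(2) X = A + A^dagger   (hbar = 1)
P = 1j * (a.T.conj() - a) / np.sqrt(2)      # sqrt(2) P = i (A^dagger - A)

comm = X @ P - P @ X                        # should be i times the identity
print(np.allclose(comm[:-1, :-1], 1j * np.eye(N - 1)))   # True away from the cutoff

H = 0.5 * (X @ X + P @ P)                   # exactly diagonal even at finite N
print(np.round(np.diag(H).real, 2))         # 0.5, 1.5, 2.5, ... (last entry is a cutoff artifact)
```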
The harmonic oscillator is an important but very special case: it is easy to find its matrices exactly, but hard to infer general conditions from such special forms. For this reason, Heisenberg investigated the anharmonic oscillator , with Hamiltonian H = 1 2 P 2 + 1 2 X 2 + ε X 3 . {\displaystyle H={\tfrac {1}{2}}P^{2}+{\tfrac {1}{2}}X^{2}+\varepsilon X^{3}~.}
In this case, the X and P matrices are no longer simple off-diagonal matrices, since the corresponding classical orbits are slightly squashed and displaced, so that they have Fourier coefficients at every classical frequency. To determine the matrix elements, Heisenberg required that the classical equations of motion be obeyed as matrix equations, d X d t = P , d P d t = − X − 3 ε X 2 . {\displaystyle {\frac {dX}{dt}}=P~,\qquad {\frac {dP}{dt}}=-X-3\varepsilon X^{2}~.}
He noticed that if this could be done, then H , considered as a matrix function of X and P , will have zero time derivative. d H d t = P ∗ d P d t + ( X + 3 ε X 2 ) ∗ d X d t = 0 , {\displaystyle {\frac {dH}{dt}}=P*{\frac {dP}{dt}}+\left(X+3\varepsilon X^{2}\right)*{\frac {dX}{dt}}=0~,} where A ∗ B is the anticommutator , A ∗ B = 1 2 ( A B + B A ) . {\displaystyle A*B={\tfrac {1}{2}}(AB+BA)~.}
Given that all the off-diagonal elements have a nonzero frequency, H being constant implies that H is diagonal.
It was clear to Heisenberg that with this formalism the energy could be exactly conserved in an arbitrary quantum system, a very encouraging sign.
The process of emission and absorption of photons seemed to demand that the conservation of energy would hold at best on average. If a wave containing exactly one photon passes over some atoms, and one of them absorbs it, that atom needs to tell the others that they can't absorb the photon anymore. But if the atoms are far apart, no signal can reach the other atoms in time, and they might end up absorbing the same photon anyway and dissipating the energy to the environment. When the signal reached them, the other atoms would have to somehow recall that energy. This paradox led Bohr, Kramers and Slater to abandon exact conservation of energy. Heisenberg's formalism, when extended to include the electromagnetic field, was obviously going to sidestep this problem, a hint that the interpretation of the theory would involve wavefunction collapse .
Demanding that the classical equations of motion are preserved is not a strong enough condition to determine the matrix elements. The Planck constant does not appear in the classical equations, so that the matrices could be constructed for many different values of ħ and still satisfy the equations of motion, but with different energy levels.
So, in order to implement his program, Heisenberg needed to use the old quantum condition to fix the energy levels, then fill in the matrices with Fourier coefficients of the classical equations, then alter the matrix coefficients and the energy levels slightly to make sure the classical equations are satisfied. This is clearly not satisfactory. The old quantum conditions refer to the area enclosed by the sharp classical orbits, which do not exist in the new formalism.
The most important thing that Heisenberg discovered is how to translate the old quantum condition into a simple statement in matrix mechanics.
To do this, he investigated the action integral as a matrix quantity, ∫ 0 T ∑ k P m k ( t ) d X k n d t d t ≈ ? J m n . {\displaystyle \int _{0}^{T}\sum _{k}P_{mk}(t){\frac {dX_{kn}}{dt}}dt\,\,{\stackrel {\scriptstyle ?}{\approx }}\,\,J_{mn}~.}
There are several problems with this integral, all stemming from the incompatibility of the matrix formalism with the old picture of orbits. Which period T should be used? Semiclassically , it should be either m or n , but the difference is order ħ , and an answer to order ħ is sought. The quantum condition tells us that J mn is 2 πn on the diagonal, so the fact that J is classically constant tells us that the off-diagonal elements are zero.
His crucial insight was to differentiate the quantum condition with respect to n . This idea only makes complete sense in the classical limit, where n is not an integer but the continuous action variable J , but Heisenberg performed analogous manipulations with matrices, where the intermediate expressions are sometimes discrete differences and sometimes derivatives.
In the following discussion, for the sake of clarity, the differentiation will be performed on the classical variables, and the transition to matrix mechanics will be done afterwards, guided by the correspondence principle.
In the classical setting, the derivative is the derivative with respect to J of the integral which defines J , so it is tautologically equal to 1. d d J ∫ 0 T P d X = 1 = ∫ 0 T d t ( d P d J d X d t + P d d J d X d t ) = ∫ 0 T d t ( d P d J d X d t − d P d t d X d J ) {\displaystyle {\begin{aligned}{}{\frac {d}{dJ}}\int _{0}^{T}PdX&=1\\&=\int _{0}^{T}dt\left({\frac {dP}{dJ}}{\frac {dX}{dt}}+P{\frac {d}{dJ}}{\frac {dX}{dt}}\right)\\&=\int _{0}^{T}dt\left({\frac {dP}{dJ}}{\frac {dX}{dt}}-{\frac {dP}{dt}}{\frac {dX}{dJ}}\right)\end{aligned}}} where the derivatives dP / dJ and dX / dJ should be interpreted as differences with respect to J at corresponding times on nearby orbits, exactly what would be obtained if the Fourier coefficients of the orbital motion were differentiated. (These derivatives are symplectically orthogonal in phase space to the time derivatives dP / dt and dX / dt ).
The final expression is clarified by introducing the variable canonically conjugate to J , which is called the angle variable θ : The derivative with respect to time is a derivative with respect to θ , up to a factor of 2 π / T , 2 π T ∫ 0 T d t ( d P d J d X d θ − d P d θ d X d J ) = 1 . {\displaystyle {\frac {2\pi }{T}}\int _{0}^{T}dt\left({\frac {dP}{dJ}}{\frac {dX}{d\theta }}-{\frac {dP}{d\theta }}{\frac {dX}{dJ}}\right)=1\,.} So the quantum condition integral is the average value over one cycle of the Poisson bracket of X and P .
An analogous differentiation of the Fourier series of P dX demonstrates that the off-diagonal elements of the Poisson bracket are all zero. The Poisson bracket of two canonically conjugate variables, such as X and P , is the constant value 1, so this integral really is the average value of 1; so it is 1, as we knew all along, because it is dJ / dJ after all. But Heisenberg, Born and Jordan, unlike Dirac, were not familiar with the theory of Poisson brackets, so, for them, the differentiation effectively evaluated { X, P } in J , θ coordinates.
The Poisson Bracket, unlike the action integral, does have a simple translation to matrix mechanics – it normally corresponds to the imaginary part of the product of two variables, the commutator .
To see this, examine the (antisymmetrized) product of two matrices A and B in the correspondence limit, where the matrix elements are slowly varying functions of the index, keeping in mind that the answer is zero classically.
In the correspondence limit, when indices m , n are large and nearby, while k , r are small, the rate of change of the matrix elements in the diagonal direction is the matrix element of the J derivative of the corresponding classical quantity. So it is possible to shift any matrix element diagonally through the correspondence, A ( m + r ) ( n + r ) − A m n ≈ r ( d A d J ) m n {\displaystyle A_{(m+r)(n+r)}-A_{mn}\approx r\;\left({\frac {dA}{dJ}}\right)_{mn}} where the right hand side is really only the ( m − n ) th Fourier component of dA / dJ at the orbit near m to this semiclassical order, not a full well-defined matrix.
The semiclassical time derivative of a matrix element is obtained up to a factor of i by multiplying by the distance from the diagonal, i k A m ( m + k ) ≈ ( T 2 π d A d t ) m ( m + k ) = ( d A d θ ) m ( m + k ) . {\displaystyle ikA_{m(m+k)}\approx \left({\frac {T}{2\pi }}{\frac {dA}{dt}}\right)_{m(m+k)}=\left({\frac {dA}{d\theta }}\right)_{m(m+k)}\,.} since the coefficient A m ( m + k ) is semiclassically the k th Fourier coefficient of the m th classical orbit.
The imaginary part of the product of A and B can be evaluated by shifting the matrix elements around so as to reproduce the classical answer, which is zero.
The leading nonzero residual is then given entirely by the shifting. Since all the matrix elements are at indices which have a small distance from the large index position ( m , m ) , it helps to introduce two temporary notations: A [ r , k ] = A ( m + r )( m + k ) for the matrices, and dA / dJ [ r ] for the r th Fourier components of classical quantities, ( A B − B A ) [ 0 , k ] = ∑ r = − ∞ ∞ ( A [ 0 , r ] B [ r , k ] − A [ r , k ] B [ 0 , r ] ) = ∑ r ( A [ − r + k , k ] + ( r − k ) d A d J [ r ] ) ( B [ 0 , k − r ] + r d B d J [ r − k ] ) − ∑ r A [ r , k ] B [ 0 , r ] . {\displaystyle {\begin{aligned}(AB-BA)[0,k]&=\sum _{r=-\infty }^{\infty }{\bigl (}A[0,r]B[r,k]-A[r,k]B[0,r]{\bigr )}\\&=\sum _{r}\left(A[-r+k,k]+(r-k){\frac {dA}{dJ}}[r]\right)\left(B[0,k-r]+r{\frac {dB}{dJ}}[r-k]\right)-\sum _{r}A[r,k]B[0,r]\,.\end{aligned}}}
Flipping the summation variable in the first sum from r to r ′ = k − r , the matrix element becomes, ∑ r ′ ( A [ r ′ , k ] − r ′ d A d J [ k − r ′ ] ) ( B [ 0 , r ′ ] + ( k − r ′ ) d B d J [ r ′ ] ) − ∑ r A [ r , k ] B [ 0 , r ] {\displaystyle \sum _{r'}\left(A[r',k]-r'{\frac {dA}{dJ}}[k-r']\right)\left(B[0,r']+(k-r'){\frac {dB}{dJ}}[r']\right)-\sum _{r}A[r,k]B[0,r]} and it is clear that the principal (classical) part cancels.
The leading quantum part, neglecting the higher order product of derivatives in the residual expression, is then equal to ∑ r ′ ( d B d J [ r ′ ] ( k − r ′ ) A [ r ′ , k ] − d A d J [ k − r ′ ] r ′ B [ 0 , r ′ ] ) {\displaystyle \sum _{r'}\left({\frac {dB}{dJ}}[r'](k-r')A[r',k]-{\frac {dA}{dJ}}[k-r']r'B[0,r']\right)} so that, finally, ( A B − B A ) [ 0 , k ] = ∑ r ′ ( d B d J [ r ′ ] i d A d θ [ k − r ′ ] − d A d J [ k − r ′ ] i d B d θ [ r ′ ] ) {\displaystyle (AB-BA)[0,k]=\sum _{r'}\left({\frac {dB}{dJ}}[r']i{\frac {dA}{d\theta }}[k-r']-{\frac {dA}{dJ}}[k-r']i{\frac {dB}{d\theta }}[r']\right)} which can be identified with i times the k th classical Fourier component of the Poisson bracket.
Heisenberg's original differentiation trick was eventually extended to a full semiclassical derivation of the quantum condition, in collaboration with Born and Jordan.
Once they were able to establish that i ℏ { X , P } P B ⟼ [ X , P ] ≡ X P − P X = i ℏ , {\displaystyle i\hbar \{X,P\}_{\mathrm {PB} }\qquad \longmapsto \qquad [X,P]\equiv XP-PX=i\hbar \,,} this condition replaced and extended the old quantization rule, allowing the matrix elements of P and X for an arbitrary system to be determined simply from the form of the Hamiltonian.
The new quantization rule was assumed to be universally true , even though the derivation from the old quantum theory required semiclassical reasoning.
(A full quantum treatment, however, for more elaborate arguments of the brackets, was appreciated in the 1940s to amount to extending Poisson brackets to Moyal brackets .)
To make the transition to standard quantum mechanics, the most important further addition was the quantum state vector , now written | ψ ⟩ ,
which is the vector that the matrices act on. Without the state vector, it is not clear which particular motion the Heisenberg matrices are describing, since they include all the motions somewhere.
The interpretation of the state vector, whose components are written ψ m , was furnished by Born. This interpretation is statistical: the result of a measurement of the physical quantity corresponding to the matrix A is random, with an average value equal to ∑ m n ψ m ∗ A m n ψ n . {\displaystyle \sum _{mn}\psi _{m}^{*}A_{mn}\psi _{n}\,.} Alternatively, and equivalently, the state vector gives the probability amplitude ψ n for the quantum system to be in the energy state n .
Once the state vector was introduced, matrix mechanics could be rotated to any basis , where the H matrix need no longer be diagonal. The Heisenberg equation of motion in its original form states that A mn evolves in time like a Fourier component, A m n ( t ) = e i ( E m − E n ) t A m n ( 0 ) , {\displaystyle A_{mn}(t)=e^{i(E_{m}-E_{n})t}A_{mn}(0)~,} which can be recast in differential form d A m n d t = i ( E m − E n ) A m n , {\displaystyle {\frac {dA_{mn}}{dt}}=i(E_{m}-E_{n})A_{mn}~,} and it can be restated so that it is true in an arbitrary basis, by noting that the H matrix is diagonal with diagonal values E m , d A d t = i ( H A − A H ) . {\displaystyle {\frac {dA}{dt}}=i(HA-AH)~.} This is now a matrix equation, so it holds in any basis. This is the modern form of the Heisenberg equation of motion.
Its formal solution is: A ( t ) = e i H t A ( 0 ) e − i H t . {\displaystyle A(t)=e^{iHt}A(0)e^{-iHt}~.}
All these forms of the equation of motion above say the same thing, that A ( t ) is equivalent to A (0) , through a basis rotation by the unitary matrix e iHt , a systematic picture elucidated by Dirac in his bra–ket notation.
Conversely, by rotating the basis for the state vector at each time by e iHt , the time dependence in the matrices can be undone. The matrices are now time independent, but the state vector rotates, | ψ ( t ) ⟩ = e − i H t | ψ ( 0 ) ⟩ , d | ψ ⟩ d t = − i H | ψ ⟩ . {\displaystyle |\psi (t)\rangle =e^{-iHt}|\psi (0)\rangle ,\qquad {\frac {d|\psi \rangle }{dt}}=-iH|\psi \rangle \,.} This is the Schrödinger equation for the state vector, and this time-dependent change of basis amounts to transformation to the Schrödinger picture , with ⟨ x | ψ ⟩ = ψ ( x ) .
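The equivalence of the two pictures is easy to illustrate numerically; the sketch below uses a random 4-level system (ħ = 1) and scipy only for the matrix exponential, comparing ⟨ψ(0)|A(t)|ψ(0)⟩ in the Heisenberg picture with ⟨ψ(t)|A|ψ(t)⟩ in the Schrödinger picture.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                      # a Hermitian Hamiltonian
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (B + B.conj().T) / 2                      # a Hermitian observable
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)                  # normalized initial state

t = 0.7
U = expm(-1j * H * t)                         # unitary time-evolution operator
A_t = U.conj().T @ A @ U                      # Heisenberg-picture operator A(t) = e^{iHt} A e^{-iHt}
psi_t = U @ psi0                              # Schrodinger-picture state |psi(t)> = e^{-iHt}|psi(0)>

heis = psi0.conj() @ A_t @ psi0
schr = psi_t.conj() @ A @ psi_t
print(np.allclose(heis, schr))                # True: both pictures give the same expectation value
```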
In quantum mechanics in the Heisenberg picture the state vector , | ψ ⟩ does not change with time, while an observable A satisfies the Heisenberg equation of motion ,
d A d t = i ℏ [ H , A ] + ∂ A ∂ t . {\displaystyle {\frac {dA}{dt}}={\frac {i}{\hbar }}[H,A]+{\frac {\partial A}{\partial t}}~.}
The extra term is for operators such as A = ( X + t 2 P ) {\displaystyle A=\left(X+t^{2}P\right)} which have an explicit time dependence , in addition to the time dependence from the unitary evolution discussed.
The Heisenberg picture does not distinguish time from space, so it is better suited to relativistic theories than the Schrödinger equation. Moreover, the similarity to classical physics is more manifest: the Hamiltonian equations of motion for classical mechanics are recovered by replacing the commutator above by the Poisson bracket (see also below). By the Stone–von Neumann theorem , the Heisenberg picture and the Schrödinger picture must be unitarily equivalent, as detailed below.
Matrix mechanics rapidly developed into modern quantum mechanics, and gave interesting physical results on the spectra of atoms.
Jordan noted that the commutation relations ensure that P acts as a differential operator .
The operator identity [ a , b c ] = a b c − b c a = a b c − b a c + b a c − b c a = [ a , b ] c + b [ a , c ] {\displaystyle [a,bc]=abc-bca=abc-bac+bac-bca=[a,b]c+b[a,c]} allows the evaluation of the commutator of P with any power of X , and it implies that [ P , X n ] = − i n X n − 1 {\displaystyle \left[P,X^{n}\right]=-in~X^{n-1}} which, together with linearity, implies that a P -commutator effectively differentiates any analytic matrix function of X .
Assuming limits are defined sensibly, this extends to arbitrary functions, but the extension need not be made explicit until a certain degree of mathematical rigor is required,
[ P , f ( X ) ] = − i f ′ ( X ) . {\displaystyle [P,f(X)]=-if'(X)\,.}
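This identity is easy to verify symbolically when P is realized as −i d/dx (ħ = 1), as in the position representation discussed below; the following is a minimal sympy sketch applying both sides to an arbitrary wavefunction.

```python
import sympy as sp

# Check that P = -i d/dx satisfies [P, f(X)] psi = -i f'(x) psi for arbitrary f and psi.
x = sp.symbols('x', real=True)
psi = sp.Function('psi')(x)
f = sp.Function('f')(x)

P = lambda g: -sp.I * sp.diff(g, x)           # momentum as a differential operator
commutator = P(f * psi) - f * P(psi)          # [P, f(X)] applied to psi
print(sp.simplify(commutator + sp.I * sp.diff(f, x) * psi))   # 0
```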
Since X is a Hermitian matrix, it should be diagonalizable, and it will be clear from the eventual form of P that every real number can be an eigenvalue. This makes some of the mathematics subtle, since there is a separate eigenvector for every point in space.
In the basis where X is diagonal, an arbitrary state can be written as a superposition of states with eigenvalues x , | ψ ⟩ = ∫ x ψ ( x ) | x ⟩ , {\displaystyle |\psi \rangle =\int _{x}\psi (x)|x\rangle \,,} so that ψ ( x ) = ⟨ x | ψ ⟩ , and the operator X multiplies each eigenvector by x , X | ψ ⟩ = ∫ x x ψ ( x ) | x ⟩ . {\displaystyle X|\psi \rangle =\int _{x}x\psi (x)|x\rangle ~.}
Define a linear operator D which differentiates ψ , D ∫ x ψ ( x ) | x ⟩ = ∫ x ψ ′ ( x ) | x ⟩ , {\displaystyle D\int _{x}\psi (x)|x\rangle =\int _{x}\psi '(x)|x\rangle \,,} and note that ( D X − X D ) | ψ ⟩ = ∫ x [ ( x ψ ( x ) ) ′ − x ψ ′ ( x ) ] | x ⟩ = ∫ x ψ ( x ) | x ⟩ = | ψ ⟩ , {\displaystyle (DX-XD)|\psi \rangle =\int _{x}\left[\left(x\psi (x)\right)'-x\psi '(x)\right]|x\rangle =\int _{x}\psi (x)|x\rangle =|\psi \rangle \,,} so that the operator − iD obeys the same commutation relation as P . Thus, the difference between P and − iD must commute with X , [ P + i D , X ] = 0 , {\displaystyle [P+iD,X]=0\,,} so it may be simultaneously diagonalized with X : its value acting on any eigenstate of X is some function f of the eigenvalue x .
This function must be real, because both P and − iD are Hermitian, ( P + i D ) | x ⟩ = f ( x ) | x ⟩ , {\displaystyle (P+iD)|x\rangle =f(x)|x\rangle \,,} rotating each state |x⟩ by a phase f ( x ) , that is, redefining the phase of the wavefunction: ψ ( x ) → e − i f ( x ) ψ ( x ) . {\displaystyle \psi (x)\rightarrow e^{-if(x)}\psi (x)\,.} The operator iD is redefined by an amount: i D → i D + f ( X ) , {\displaystyle iD\rightarrow iD+f(X)\,,} which means that, in the rotated basis, P is equal to − iD .
Hence, there is always a basis for the eigenvalues of X where the action of P on any wavefunction is known: P ∫ x ψ ( x ) | x ⟩ = ∫ x − i ψ ′ ( x ) | x ⟩ , {\displaystyle P\int _{x}\psi (x)|x\rangle =\int _{x}-i\psi '(x)|x\rangle \,,} and the Hamiltonian in this basis is a linear differential operator on the state-vector components, [ P 2 2 m + V ( X ) ] ∫ x ψ x | x ⟩ = ∫ x [ − 1 2 m ∂ 2 ∂ x 2 + V ( x ) ] ψ x | x ⟩ {\displaystyle \left[{\frac {P^{2}}{2m}}+V(X)\right]\int _{x}\psi _{x}|x\rangle =\int _{x}\left[-{\frac {1}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V(x)\right]\psi _{x}|x\rangle }
Thus, the equation of motion for the state vector is but a celebrated differential equation,
i ∂ ∂ t ψ t ( x ) = [ − 1 2 m ∂ 2 ∂ x 2 + V ( x ) ] ψ t ( x ) . {\displaystyle i{\frac {\partial }{\partial t}}\psi _{t}(x)=\left[-{\frac {1}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V(x)\right]\psi _{t}(x)\,.}
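As a sanity check of this differential-operator form, one can discretize the position basis on a grid and diagonalize the resulting finite matrix; the sketch below (grid size, box length and the central-difference second derivative are arbitrary illustrative choices, ħ = m = 1) recovers the harmonic-oscillator levels n + 1/2 for V(x) = x²/2.

```python
import numpy as np

n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# -1/2 d^2/dx^2 approximated by a central difference, plus the diagonal potential V(x) = x^2 / 2
kinetic = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
           - np.diag(np.ones(n - 1), -1)) / (2 * dx**2)
potential = np.diag(0.5 * x**2)
Hmat = kinetic + potential

print(np.round(np.linalg.eigvalsh(Hmat)[:4], 2))   # approximately [0.5 1.5 2.5 3.5]
```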
Since D is a differential operator, in order for it to be sensibly defined, there must be eigenvalues of X which neighbor every given value. This suggests that the only possibility is that the space of all eigenvalues of X is all real numbers, and that P is − iD , up to a phase rotation .
To make this rigorous requires a sensible discussion of the limiting space of functions, and in this space this is the Stone–von Neumann theorem : any operators X and P which obey the commutation relations can be made to act on a space of wavefunctions, with P a derivative operator. This implies that a Schrödinger picture is always available.
Matrix mechanics easily extends to many degrees of freedom in a natural way. Each degree of freedom has a separate X operator and a separate effective differential operator P , and the wavefunction is a function of all the possible eigenvalues of the independent commuting X variables. [ X i , X j ] = 0 [ P i , P j ] = 0 [ X i , P j ] = i δ i j . {\displaystyle {\begin{aligned}\left[X_{i},X_{j}\right]&=0\\[1ex]\left[P_{i},P_{j}\right]&=0\\[1ex]\left[X_{i},P_{j}\right]&=i\delta _{ij}\,.\end{aligned}}}
In particular, this means that a system of N interacting particles in 3 dimensions is described by one vector whose components in a basis where all the X are diagonal is a mathematical function of 3 N -dimensional space describing all their possible positions , effectively a much bigger collection of values than the mere collection of N three-dimensional wavefunctions in one physical space. Schrödinger came to the same conclusion independently, and eventually proved the equivalence of his own formalism to Heisenberg's.
Since the wavefunction is a property of the whole system, not of any one part, the description in quantum mechanics is not entirely local. The description of several quantum particles has them correlated, or entangled . This entanglement leads to strange correlations between distant particles which violate the classical Bell's inequality .
Even if the particles can only be in just two positions, the wavefunction for N particles requires 2 N complex numbers, one for each total configuration of positions. This is exponentially many numbers in N , so simulating quantum mechanics on a computer requires exponential resources. Conversely, this suggests that it might be possible to find quantum systems of size N which physically compute the answers to problems which classically require 2 N bits to solve. This is the aspiration behind quantum computing .
For the time-independent operators X and P , ∂ A / ∂ t = 0 so the Heisenberg equation above reduces to: [ 30 ] i ℏ d A d t = [ A , H ] = A H − H A , {\displaystyle i\hbar {\frac {dA}{dt}}=[A,H]=AH-HA,} where the square brackets [ , ] denote the commutator. For a Hamiltonian which is p 2 / 2 m + V ( x ) , the X and P operators satisfy: d X d t = P m , d P d t = − ∇ V , {\displaystyle {\frac {dX}{dt}}={\frac {P}{m}},\quad {\frac {dP}{dt}}=-\nabla V,} where the first is classically the velocity , and second is classically the force , or potential gradient . These reproduce Hamilton's form of Newton's laws of motion . In the Heisenberg picture, the X and P operators satisfy the classical equations of motion. You can take the expectation value of both sides of the equation to see that, in any state | ψ ⟩ : d d t ⟨ X ⟩ = d d t ⟨ ψ | X | ψ ⟩ = 1 m ⟨ ψ | P | ψ ⟩ = 1 m ⟨ P ⟩ d d t ⟨ P ⟩ = d d t ⟨ ψ | P | ψ ⟩ = ⟨ ψ | ( − ∇ V ) | ψ ⟩ = − ⟨ ∇ V ⟩ . {\displaystyle {\begin{aligned}{\frac {d}{dt}}\langle X\rangle &={\frac {d}{dt}}\langle \psi |X|\psi \rangle ={\frac {1}{m}}\langle \psi |P|\psi \rangle ={\frac {1}{m}}\langle P\rangle \\[1.5ex]{\frac {d}{dt}}\langle P\rangle &={\frac {d}{dt}}\langle \psi |P|\psi \rangle =\langle \psi |(-\nabla V)|\psi \rangle =-\langle \nabla V\rangle \,.\end{aligned}}}
So Newton's laws are exactly obeyed by the expected values of the operators in any given state. This is Ehrenfest's theorem , which is an obvious corollary of the Heisenberg equations of motion, but is less trivial in the Schrödinger picture, where Ehrenfest discovered it.
In classical mechanics, a canonical transformation of phase space coordinates is one which preserves the structure of the Poisson brackets. The new variables x ′ , p ′ have the same Poisson brackets with each other as the original variables x , p . Time evolution is a canonical transformation, since the phase space at any time is just as good a choice of variables as the phase space at any other time.
The Hamiltonian flow is the canonical transformation : x → x + d x = x + ∂ H ∂ p d t p → p + d p = p − ∂ H ∂ x d t . {\displaystyle {\begin{aligned}x&\rightarrow x+dx=x+{\frac {\partial H}{\partial p}}dt\\[1ex]p&\rightarrow p+dp=p-{\frac {\partial H}{\partial x}}dt~.\end{aligned}}}
Since the Hamiltonian can be an arbitrary function of x and p , there are such infinitesimal canonical transformations corresponding to every classical quantity G , where G serves as the Hamiltonian to generate a flow of points in phase space for an increment of time s , d x = ∂ G ∂ p d s = { G , X } d s d p = − ∂ G ∂ x d s = { G , P } d s . {\displaystyle {\begin{aligned}dx&={\frac {\partial G}{\partial p}}ds=\left\{G,X\right\}ds\\[1ex]dp&=-{\frac {\partial G}{\partial x}}ds=\left\{G,P\right\}ds\,.\end{aligned}}}
For a general function A ( x , p ) on phase space, its infinitesimal change at every step ds under this map is d A = ∂ A ∂ x d x + ∂ A ∂ p d p = { A , G } d s . {\displaystyle dA={\frac {\partial A}{\partial x}}dx+{\frac {\partial A}{\partial p}}dp=\{A,G\}ds\,.} The quantity G is called the infinitesimal generator of the canonical transformation.
In quantum mechanics, the quantum analog G is now a Hermitian matrix, and the equations of motion are given by commutators, d A = i [ G , A ] d s . {\displaystyle dA=i[G,A]ds\,.}
The infinitesimal canonical motions can be formally integrated, just as the Heisenberg equation of motion was integrated, A ′ = U A U † {\displaystyle A'=UAU^{\dagger }} where U = e iGs and s is an arbitrary parameter.
The definition of a quantum canonical transformation is thus an arbitrary unitary change of basis on the space of all state vectors. U is an arbitrary unitary matrix, a complex rotation in phase space, U † = U − 1 . {\displaystyle U^{\dagger }=U^{-1}\,.} These transformations leave the sum of the absolute square of the wavefunction components invariant , while they take states which are multiples of each other (including states which are imaginary multiples of each other) to states which are the same multiple of each other.
The interpretation of the matrices is that they act as generators of motions on the space of states .
For example, the motion generated by P can be found by solving the Heisenberg equation of motion using P as a Hamiltonian, d X = i [ P , X ] d s = d s d P = i [ P , P ] d s = 0 . {\displaystyle {\begin{aligned}dX&=i[P,X]ds=ds\\[1ex]dP&=i[P,P]ds=0\,.\end{aligned}}} These are translations of the matrix X by a multiple of the identity matrix, X → X + s I . {\displaystyle X\rightarrow X+sI~.} This is the interpretation of the derivative operator D : e iPs = e sD , the exponential of a derivative operator is a translation (so Lagrange's shift operator ).
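The statement that the exponential of the derivative operator is a shift can be checked symbolically for a polynomial, where the exponential series terminates after finitely many terms; the sketch below is only an illustration of the identity e^{s d/dx} f(x) = f(x + s).

```python
import sympy as sp

x, s = sp.symbols('x s')
f = x**3 - 2 * x
# Apply the (terminating) series sum_k s^k/k! d^k/dx^k to f and compare with f(x + s).
shifted = sum(s**k / sp.factorial(k) * sp.diff(f, x, k) for k in range(5))
print(sp.expand(shifted - f.subs(x, x + s)))   # 0
```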
The X operator likewise generates translations in P . The Hamiltonian generates translations in time , the angular momentum generates rotations in physical space , and the operator X 2 + P 2 generates rotations in phase space .
When a transformation, like a rotation in physical space, commutes with the Hamiltonian, the transformation is called a symmetry (behind a degeneracy) of the Hamiltonian – the Hamiltonian expressed in terms of rotated coordinates is the same as the original Hamiltonian. This means that the change in the Hamiltonian under the infinitesimal symmetry generator L vanishes, d H d s = i [ L , H ] = 0 . {\displaystyle {\frac {dH}{ds}}=i[L,H]=0\,.}
It then follows that the change in the generator under time translation also vanishes, d L d t = i [ H , L ] = 0 {\displaystyle {\frac {dL}{dt}}=i[H,L]=0} so that the matrix L is constant in time: it is conserved.
The one-to-one association of infinitesimal symmetry generators and conservation laws was discovered by Emmy Noether for classical mechanics, where the commutators are Poisson brackets , but the quantum-mechanical reasoning is identical. In quantum mechanics, any unitary symmetry transformation yields a conservation law, since if the matrix U has the property that U − 1 H U = H {\displaystyle U^{-1}HU=H} then it follows that U H = H U {\displaystyle UH=HU} and that the time derivative of U is zero – it is conserved.
The eigenvalues of unitary matrices are pure phases, so that the value of a unitary conserved quantity is a complex number of unit magnitude, not a real number. Another way of saying this is that a unitary matrix is the exponential of i times a Hermitian matrix, so that the additive conserved real quantity, the phase, is only well-defined up to an integer multiple of 2 π . Only when the unitary symmetry matrix is part of a family that comes arbitrarily close to the identity are the conserved real quantities single-valued, and then the demand that they are conserved becomes a much more exacting constraint.
Symmetries which can be continuously connected to the identity are called continuous , and translations, rotations, and boosts are examples. Symmetries which cannot be continuously connected to the identity are discrete , and the operation of space-inversion, or parity , and charge conjugation are examples.
The interpretation of the matrices as generators of canonical transformations is due to Paul Dirac. [ 31 ] The correspondence between symmetries and matrices was shown by Eugene Wigner to be complete, if antiunitary matrices which describe symmetries which include time-reversal are included.
It was physically clear to Heisenberg that the absolute squares of the matrix elements of X , which are the Fourier coefficients of the oscillation, would yield the rate of emission of electromagnetic radiation.
In the classical limit of large orbits, if a charge with position X ( t ) and charge q is oscillating next to an equal and opposite charge at position 0, the instantaneous dipole moment is q X ( t ) , and the time variation of this moment translates directly into the space-time variation of the vector potential, which yields nested outgoing spherical waves.
For atoms, the wavelength of the emitted light is about 10,000 times the atomic radius, and the dipole moment is the only contribution to the radiative field, while all other details of the atomic charge distribution can be ignored.
Ignoring back-reaction, the power radiated in each outgoing mode is a sum of separate contributions from the square of each independent time Fourier mode of d , P ( ω ) = 2 3 ω 4 | d i | 2 . {\displaystyle P(\omega )={\tfrac {2}{3}}{\omega ^{4}}|d_{i}|^{2}~.}
Now, in Heisenberg's representation, the Fourier coefficients of the dipole moment are the matrix elements of X . This correspondence allowed Heisenberg to provide the rule for the transition intensities, the fraction of the time that, starting from an initial state i , a photon is emitted and the atom jumps to a final state j , P i j = 2 3 ( E i − E j ) 4 | X i j | 2 . {\displaystyle P_{ij}={\tfrac {2}{3}}\left(E_{i}-E_{j}\right)^{4}\left|X_{ij}\right|^{2}\,.}
This then allowed the magnitude of the matrix elements to be interpreted statistically: they give the intensity of the spectral lines, the probability for quantum jumps from the emission of dipole radiation .
Since the transition rates are given by the matrix elements of X , wherever X ij is zero, the corresponding transition should be absent. These were called the selection rules , which were a puzzle until the advent of matrix mechanics.
An arbitrary state of the hydrogen atom, ignoring spin, is labelled by | n ; l , m ⟩ , where the value of l is a measure of the total orbital angular momentum and m is its z -component, which defines the orbit orientation. The components of the angular momentum pseudovector are L i = ε i j k X j P k {\displaystyle L_{i}=\varepsilon _{ijk}X^{j}P^{k}} where the products in this expression are independent of order and real, because different components of X and P commute.
The commutation relations of L with all three coordinate matrices X , Y , Z (or with any vector) are easy to find, [ L i , X j ] = i ε i j k X k , {\displaystyle \left[L_{i},X_{j}\right]=i\varepsilon _{ijk}X_{k}\,,} which confirms that the operator L generates rotations between the three components of the vector of coordinate matrices X .
From this, the commutator of L z and the coordinate matrices X , Y , Z can be read off, [ L z , X ] = i Y , [ L z , Y ] = − i X . {\displaystyle {\begin{aligned}\left[L_{z},X\right]&=iY\,,\\[1ex]\left[L_{z},Y\right]&=-iX\,.\end{aligned}}}
This means that the quantities X + iY and X − iY have a simple commutation rule, [ L z , X + i Y ] = ( X + i Y ) , [ L z , X − i Y ] = − ( X − i Y ) . {\displaystyle {\begin{aligned}\left[L_{z},X+iY\right]&=(X+iY)\,,\\[1ex]\left[L_{z},X-iY\right]&=-(X-iY)\,.\end{aligned}}}
Just like the matrix elements of X + iP and X − iP for the harmonic oscillator Hamiltonian, this commutation law implies that these operators only have certain off-diagonal matrix elements in states of definite m , L z ( ( X + i Y ) | m ⟩ ) = ( X + i Y ) L z | m ⟩ + ( X + i Y ) | m ⟩ = ( m + 1 ) ( X + i Y ) | m ⟩ {\displaystyle L_{z}{\bigl (}(X+iY)|m\rangle {\bigr )}=(X+iY)L_{z}|m\rangle +(X+iY)|m\rangle =(m+1)(X+iY)|m\rangle } meaning that the matrix ( X + iY ) takes an eigenvector of L z with eigenvalue m to an eigenvector with eigenvalue m + 1 . Similarly, ( X − iY ) decreases m by one unit, while Z does not change the value of m .
So, in a basis of | l , m ⟩ states where L 2 and L z have definite values, the matrix elements of any of the three components of the position are zero, except when m is the same or changes by one unit.
This places a constraint on the change in total angular momentum. Any state can be rotated so that its angular momentum is in the z -direction as much as possible, where m = l . The matrix element of the position acting on | l , m ⟩ can only produce values of m which are bigger by one unit, so that if the coordinates are rotated so that the final state is | l ′, l ′⟩ , the value of l ′ can be at most one bigger than the biggest value of l that occurs in the initial state. So l ′ is at most l + 1 .
The matrix elements vanish for l ′ > l + 1 , and the reverse matrix element is determined by Hermiticity, so these vanish also when l ′ < l − 1 : Dipole transitions are forbidden with a change in angular momentum of more than one unit.
The Heisenberg equation of motion determines the matrix elements of P in the Heisenberg basis from the matrix elements of X . P i j = m d d t X i j = i m ( E i − E j ) X i j , {\displaystyle P_{ij}=m{\frac {d}{dt}}X_{ij}=im\left(E_{i}-E_{j}\right)X_{ij}\,,} which turns the diagonal part of the commutation relation into a sum rule for the magnitude of the matrix elements: ∑ j P i j x j i − X i j p j i = i ∑ j 2 m ( E i − E j ) | X i j | 2 = i . {\displaystyle \sum _{j}P_{ij}x_{ji}-X_{ij}p_{ji}=i\sum _{j}2m\left(E_{i}-E_{j}\right)\left|X_{ij}\right|^{2}=i\,.}
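This sum rule can be checked numerically on a truncated harmonic-oscillator basis. The sketch below is illustrative only and is not part of the original text; it uses NumPy with ħ = m = ω = 1 and the sign convention ∑ j 2 m ( E j − E i ) | X i j | 2 = ħ 2 (the Thomas–Reiche–Kuhn form). States near the truncation edge fail the rule only because the basis is cut off.

```python
import numpy as np

N = 30                                    # truncated harmonic-oscillator basis (hbar = m = omega = 1)
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator
X = (a + a.T) / np.sqrt(2.0)              # position matrix: nonzero only for j = i +/- 1
E = n + 0.5                               # oscillator energy levels

for i in range(4):
    s = sum(2.0 * (E[j] - E[i]) * abs(X[i, j]) ** 2 for j in range(N))
    print(i, round(s, 12))                # each sum equals 1 (= hbar^2 in these units)
```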
This yields a relation for the sum of the spectroscopic intensities to and from any given state, although to be absolutely correct, contributions from the radiative capture probability for unbound scattering states must be included in the sum: ∑ j 2 m ( E i − E j ) | X i j | 2 = 1 . {\displaystyle \sum _{j}2m\left(E_{i}-E_{j}\right)\left|X_{ij}\right|^{2}=1\,.} | https://en.wikipedia.org/wiki/Matrix_mechanics |
A matrix metalloproteinase inhibitor ( INN stem –mastat [ 1 ] ) inhibits matrix metalloproteinases . Because they inhibit cell migration , they have antiangiogenic effects. They are endogenous or exogenous.
The best-known endogenous matrix metalloproteinase inhibitors are the tissue inhibitors of metalloproteinases , followed by cartilage-derived angiogenesis inhibitors .
Exogenous matrix metalloproteinase inhibitors were developed as anticancer drugs. [ 2 ] Examples include:
Metalloproteinase inhibitors are found in numerous marine organisms, including fish, cephalopods, mollusks, algae and bacteria. [ 3 ]
| https://en.wikipedia.org/wiki/Matrix_metalloproteinase_inhibitor
In linear algebra , a matrix pencil is a matrix -valued polynomial function defined on a field K {\displaystyle K} , usually the real or complex numbers .
Let K {\displaystyle K} be a field (typically, K ∈ { R , C } {\displaystyle K\in \{\mathbb {R} ,\mathbb {C} \}} ; the definition can be generalized to rngs ), let ℓ ≥ 0 {\displaystyle \ell \geq 0} be a non-negative integer, let n > 0 {\displaystyle n>0} be a positive integer, and let A 0 , A 1 , … , A ℓ {\displaystyle A_{0},A_{1},\dots ,A_{\ell }} be n × n {\displaystyle n\times n} matrices (i.e. A i ∈ M a t ( K , n × n ) {\displaystyle A_{i}\in \mathrm {Mat} (K,n\times n)} for all i = 0 , … , ℓ {\displaystyle i=0,\dots ,\ell } ). Then the matrix pencil defined by A 0 , … , A ℓ {\displaystyle A_{0},\dots ,A_{\ell }} is the matrix-valued function L : K → M a t ( K , n × n ) {\displaystyle L\colon K\to \mathrm {Mat} (K,n\times n)} defined by L ( λ ) = ∑ i = 0 ℓ λ i A i . {\displaystyle L(\lambda )=\sum _{i=0}^{\ell }\lambda ^{i}A_{i}.}
The degree of the matrix pencil is defined as the largest integer 0 ≤ k ≤ ℓ {\displaystyle 0\leq k\leq \ell } such that A k ≠ 0 {\displaystyle A_{k}\neq 0} , where 0 denotes the n × n {\displaystyle n\times n} zero matrix over K {\displaystyle K} .
A particular case is a linear matrix pencil L ( λ ) = A − λ B {\displaystyle L(\lambda )=A-\lambda B} (where B ≠ 0 {\displaystyle B\neq 0} ). [ 1 ] We denote it briefly with the notation ( A , B ) {\displaystyle (A,B)} , and note that using the more general notation, A 0 = A {\displaystyle A_{0}=A} and A 1 = − B {\displaystyle A_{1}=-B} (not B {\displaystyle B} ).
A pencil is called regular if there is at least one value of λ {\displaystyle \lambda } such that det ( L ( λ ) ) ≠ 0 {\displaystyle \det(L(\lambda ))\neq 0} ; otherwise it is called singular. We call eigenvalues of a matrix pencil all (complex) numbers λ {\displaystyle \lambda } for which det ( L ( λ ) ) = 0 {\displaystyle \det(L(\lambda ))=0} ; in particular, the eigenvalues of the matrix pencil ( A , I ) {\displaystyle (A,I)} are the matrix eigenvalues of A {\displaystyle A} . For linear pencils in particular, the eigenvalues of the pencil are also called generalized eigenvalues.
The set of the eigenvalues of a pencil is called the spectrum of the pencil, and is written σ ( A 0 , … , A ℓ ) {\displaystyle \sigma (A_{0},\dots ,A_{\ell })} . For the linear pencil ( A , B ) {\displaystyle (A,B)} , it is written as σ ( A , B ) {\displaystyle \sigma (A,B)} (not σ ( A , − B ) {\displaystyle \sigma (A,-B)} ).
The linear pencil ( A , B ) {\displaystyle (A,B)} is said to have one or more eigenvalues at infinity if B {\displaystyle B} has one or more 0 eigenvalues.
Matrix pencils play an important role in numerical linear algebra . The problem of finding the eigenvalues of a pencil is called the generalized eigenvalue problem . The most popular algorithm for this task is the QZ algorithm , which is an implicit version of the QR algorithm to solve the eigenvalue problem A x = λ B x {\displaystyle Ax=\lambda Bx} without inverting the matrix B {\displaystyle B} (which is impossible when B {\displaystyle B} is singular, or numerically unstable when it is ill-conditioned ).
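As an illustration (not part of the original article), the generalized eigenvalue problem for a linear pencil can be solved with SciPy: scipy.linalg.eig accepts a second matrix B, and scipy.linalg.qz computes the QZ (generalized Schur) decomposition. The 2×2 matrices below are arbitrary examples, with a singular B chosen so that the pencil has an eigenvalue at infinity.

```python
import numpy as np
from scipy.linalg import eig, qz

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])          # singular B: the pencil (A, B) has an eigenvalue at infinity

lam, _ = eig(A, B)                  # generalized eigenvalues: roots of det(A - lam*B) = 0
print(lam)                          # one finite eigenvalue (-0.5) and one infinite

AA, BB, Q, Z = qz(A, B)             # QZ decomposition: A = Q @ AA @ Z^H, B = Q @ BB @ Z^H
print(np.diag(AA), np.diag(BB))     # eigenvalues are the ratios alpha_i/beta_i; beta_i = 0 marks infinity
```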
If A B = B A {\displaystyle AB=BA} , then the pencil generated by A {\displaystyle A} and B {\displaystyle B} : [ 2 ]
| https://en.wikipedia.org/wiki/Matrix_pencil
In mathematics, a matrix polynomial is a polynomial with square matrices as variables. Given an ordinary, scalar-valued polynomial P ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + ⋯ + a n x n , {\displaystyle P(x)=\sum _{i=0}^{n}a_{i}x^{i}=a_{0}+a_{1}x+a_{2}x^{2}+\cdots +a_{n}x^{n},}
this polynomial evaluated at a matrix A {\displaystyle A} is P ( A ) = ∑ i = 0 n a i A i = a 0 I + a 1 A + a 2 A 2 + ⋯ + a n A n , {\displaystyle P(A)=\sum _{i=0}^{n}a_{i}A^{i}=a_{0}I+a_{1}A+a_{2}A^{2}+\cdots +a_{n}A^{n},}
where I {\displaystyle I} is the identity matrix . [ 1 ]
Note that P ( A ) {\displaystyle P(A)} has the same dimension as A {\displaystyle A} .
A matrix polynomial equation is an equality between two matrix polynomials, which holds for the specific matrices in question. A matrix polynomial identity is a matrix polynomial equation which holds for all matrices A in a specified matrix ring M n ( R ).
Matrix polynomials are often demonstrated in undergraduate linear algebra classes due to their relevance in showcasing properties of linear transformations represented as matrices, most notably the Cayley–Hamilton theorem .
The characteristic polynomial of a matrix A is a scalar-valued polynomial, defined by p A ( t ) = det ( t I − A ) {\displaystyle p_{A}(t)=\det \left(tI-A\right)} . The Cayley–Hamilton theorem states that if this polynomial is viewed as a matrix polynomial and evaluated at the matrix A {\displaystyle A} itself, the result is the zero matrix: p A ( A ) = 0 {\displaystyle p_{A}(A)=0} . A polynomial p {\displaystyle p} annihilates A {\displaystyle A} if p ( A ) = 0 {\displaystyle p(A)=0} ; p {\displaystyle p} is then known as an annihilating polynomial . Thus, the characteristic polynomial is a polynomial which annihilates A {\displaystyle A} .
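A quick numerical check of the Cayley–Hamilton theorem (an illustrative sketch, not from the original article; the test matrix is arbitrary) can be done with NumPy, whose np.poly returns the characteristic polynomial coefficients of a square matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

coeffs = np.poly(A)            # coefficients of det(tI - A), highest power first
deg = len(coeffs) - 1

# evaluate the characteristic polynomial at the matrix A itself
P = sum(c * np.linalg.matrix_power(A, deg - k) for k, c in enumerate(coeffs))
print(np.allclose(P, 0))       # True: p_A(A) is the zero matrix
```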
There is a unique monic polynomial of minimal degree which annihilates A {\displaystyle A} ; this polynomial is the minimal polynomial . Any polynomial which annihilates A {\displaystyle A} (such as the characteristic polynomial) is a multiple of the minimal polynomial. [ 2 ]
It follows that given two polynomials P {\displaystyle P} and Q {\displaystyle Q} , we have P ( A ) = Q ( A ) {\displaystyle P(A)=Q(A)} if and only if P ( j ) ( λ i ) = Q ( j ) ( λ i ) for j = 0 , … , n i − 1 and i = 1 , … , s , {\displaystyle P^{(j)}(\lambda _{i})=Q^{(j)}(\lambda _{i})\quad {\text{for }}j=0,\ldots ,n_{i}-1{\text{ and }}i=1,\ldots ,s,}
where P ( j ) {\displaystyle P^{(j)}} denotes the j {\displaystyle j} th derivative of P {\displaystyle P} and λ 1 , … , λ s {\displaystyle \lambda _{1},\dots ,\lambda _{s}} are the eigenvalues of A {\displaystyle A} with corresponding indices n 1 , … , n s {\displaystyle n_{1},\dots ,n_{s}} (the index of an eigenvalue is the size of its largest Jordan block ). [ 3 ]
Matrix polynomials can be used to sum a matrix geometrical series as one would an ordinary geometric series , S = I + A + A 2 + ⋯ + A n , ( I − A ) S = I − A n + 1 , S = ( I − A ) − 1 ( I − A n + 1 ) . {\displaystyle {\begin{aligned}S&=I+A+A^{2}+\cdots +A^{n},\\(I-A)S&=I-A^{n+1},\\S&=(I-A)^{-1}\left(I-A^{n+1}\right).\end{aligned}}}
If I − A {\displaystyle I-A} is nonsingular one can evaluate the expression for the sum S {\displaystyle S} . | https://en.wikipedia.org/wiki/Matrix_polynomial |
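A small numerical check of this identity (illustrative only; the matrix and truncation order are arbitrary choices):

```python
import numpy as np

A = np.array([[0.0, 0.5],
              [0.2, 0.1]])
n = 8
I = np.eye(2)

S = sum(np.linalg.matrix_power(A, k) for k in range(n + 1))              # I + A + ... + A^n
closed_form = np.linalg.solve(I - A, I - np.linalg.matrix_power(A, n + 1))
print(np.allclose(S, closed_form))                                       # True
```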
In mathematics , the matrix sign function is a matrix function on square matrices analogous to the complex sign function . [ 1 ]
It was introduced by J.D. Roberts in 1971 as a tool for model reduction and for solving Lyapunov and algebraic Riccati equations in a technical report of Cambridge University , which was later published in a journal in 1980. [ 2 ] [ 3 ]
The matrix sign function is a generalization of the complex signum function
csgn ( z ) = { 1 if R e ( z ) > 0 , − 1 if R e ( z ) < 0 , {\displaystyle \operatorname {csgn} (z)={\begin{cases}1&{\text{if }}\mathrm {Re} (z)>0,\\-1&{\text{if }}\mathrm {Re} (z)<0,\end{cases}}}
to the matrix valued analogue csgn ( A ) {\displaystyle \operatorname {csgn} (A)} . Although the sign function is not analytic , the matrix function is well defined for all matrices that have no eigenvalue on the imaginary axis , see for example the Jordan-form-based definition (where the derivatives are all zero).
Theorem: Let A ∈ C n × n {\displaystyle A\in \mathbb {C} ^{n\times n}} , then csgn ( A ) 2 = I {\displaystyle \operatorname {csgn} (A)^{2}=I} . [ 1 ]
Theorem: Let A ∈ C n × n {\displaystyle A\in \mathbb {C} ^{n\times n}} , then csgn ( A ) {\displaystyle \operatorname {csgn} (A)} is diagonalizable and has eigenvalues that are ± 1 {\displaystyle \pm 1} . [ 1 ]
Theorem: Let A ∈ C n × n {\displaystyle A\in \mathbb {C} ^{n\times n}} , then ( I + csgn ( A ) ) / 2 {\displaystyle (I+\operatorname {csgn} (A))/2} is a projector onto the invariant subspace associated with the eigenvalues in the right-half plane , and analogously for ( I − csgn ( A ) ) / 2 {\displaystyle (I-\operatorname {csgn} (A))/2} and the left-half plane . [ 1 ]
Theorem: Let A ∈ C n × n {\displaystyle A\in \mathbb {C} ^{n\times n}} , and A = P [ J + 0 0 J − ] P − 1 {\displaystyle A=P{\begin{bmatrix}J_{+}&0\\0&J_{-}\end{bmatrix}}P^{-1}} be a Jordan decomposition such that J + {\displaystyle J_{+}} corresponds to eigenvalues with positive real part and J − {\displaystyle J_{-}} to eigenvalues with negative real part. Then csgn ( A ) = P [ I + 0 0 − I − ] P − 1 {\displaystyle \operatorname {csgn} (A)=P{\begin{bmatrix}I_{+}&0\\0&-I_{-}\end{bmatrix}}P^{-1}} , where I + {\displaystyle I_{+}} and I − {\displaystyle I_{-}} are identity matrices of sizes corresponding to J + {\displaystyle J_{+}} and J − {\displaystyle J_{-}} , respectively. [ 1 ]
The function can be computed with generic methods for matrix functions , but there are also specialized methods.
The Newton iteration can be derived by observing that csgn ( x ) = x 2 / x {\displaystyle \operatorname {csgn} (x)={\sqrt {x^{2}}}/x} , which in terms of matrices can be written as csgn ( A ) = A − 1 A 2 {\displaystyle \operatorname {csgn} (A)=A^{-1}{\sqrt {A^{2}}}} , where we use the matrix square root . If we apply the Babylonian method to compute the square root of the matrix A 2 {\displaystyle A^{2}} , that is, the iteration X k + 1 = 1 2 ( X k + A X k − 1 ) {\textstyle X_{k+1}={\frac {1}{2}}\left(X_{k}+AX_{k}^{-1}\right)} , and define the new iterate Z k = A − 1 X k {\displaystyle Z_{k}=A^{-1}X_{k}} , we arrive at the iteration
Z k + 1 = 1 2 ( Z k + Z k − 1 ) {\displaystyle Z_{k+1}={\frac {1}{2}}\left(Z_{k}+Z_{k}^{-1}\right)} ,
where typically Z 0 = A {\displaystyle Z_{0}=A} . Convergence is global, and locally it is quadratic. [ 1 ] [ 2 ]
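A minimal NumPy sketch of this Newton iteration (not from the original article; the test matrix and tolerance are arbitrary choices):

```python
import numpy as np

def matrix_sign_newton(A, tol=1e-12, maxiter=100):
    """Newton iteration Z <- (Z + inv(Z))/2 for the matrix sign function."""
    Z = A.astype(float).copy()
    for _ in range(maxiter):
        Z_new = 0.5 * (Z + np.linalg.inv(Z))
        if np.linalg.norm(Z_new - Z, 1) <= tol * np.linalg.norm(Z_new, 1):
            return Z_new
        Z = Z_new
    return Z

A = np.array([[2.0, 1.0],
              [0.0, -3.0]])              # eigenvalues 2 and -3, none on the imaginary axis
S = matrix_sign_newton(A)
print(np.allclose(S @ S, np.eye(2)))     # True: sign(A)^2 = I
```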
The Newton iteration uses the explicit inverse of the iterates Z k {\displaystyle Z_{k}} .
To avoid the explicit inverse used in the Newton iteration, the inverse can be approximated with one step of the Newton iteration for the inverse , Z k − 1 ≈ Z k ( 2 I − Z k 2 ) {\displaystyle Z_{k}^{-1}\approx Z_{k}\left(2I-Z_{k}^{2}\right)} , derived by Schulz in 1933. [ 4 ] Substituting this approximation into the previous method, the new method becomes
Z k + 1 = 1 2 Z k ( 3 I − Z k 2 ) {\displaystyle Z_{k+1}={\frac {1}{2}}Z_{k}\left(3I-Z_{k}^{2}\right)} .
Convergence is (still) quadratic, but only local (guaranteed for ‖ I − A 2 ‖ < 1 {\displaystyle \|I-A^{2}\|<1} ). [ 1 ]
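The corresponding inverse-free (Newton–Schulz) iteration, again as an illustrative sketch rather than a reference implementation; the test matrix is scaled so that ‖I − A²‖ < 1, since convergence is only local:

```python
import numpy as np

def matrix_sign_newton_schulz(A, iters=25):
    """Inverse-free iteration Z <- Z(3I - Z^2)/2; requires ||I - A^2|| < 1."""
    Z = A.astype(float).copy()
    I = np.eye(A.shape[0])
    for _ in range(iters):
        Z = 0.5 * Z @ (3.0 * I - Z @ Z)
    return Z

A = np.array([[0.9, 0.1],
              [0.0, -0.8]])                        # eigenvalues 0.9 and -0.8 map to +1 and -1
print(np.round(matrix_sign_newton_schulz(A), 8))   # upper-triangular sign(A), with sign(A)^2 = I
```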
Theorem: [ 2 ] [ 3 ] Let A , B , C ∈ R n × n {\displaystyle A,B,C\in \mathbb {R} ^{n\times n}} and assume that A {\displaystyle A} and B {\displaystyle B} are stable , then the unique solution to the Sylvester equation , A X + X B = C {\displaystyle AX+XB=C} , is given by X {\displaystyle X} such that
[ − I 2 X 0 I ] = csgn ( [ A − C 0 − B ] ) . {\displaystyle {\begin{bmatrix}-I&2X\\0&I\end{bmatrix}}=\operatorname {csgn} \left({\begin{bmatrix}A&-C\\0&-B\end{bmatrix}}\right).}
Proof sketch: The result follows from the similarity transform
[ A − C 0 − B ] = [ I X 0 I ] [ A 0 0 − B ] [ I X 0 I ] − 1 , {\displaystyle {\begin{bmatrix}A&-C\\0&-B\end{bmatrix}}={\begin{bmatrix}I&X\\0&I\end{bmatrix}}{\begin{bmatrix}A&0\\0&-B\end{bmatrix}}{\begin{bmatrix}I&X\\0&I\end{bmatrix}}^{-1},}
since
csgn ( [ A − C 0 − B ] ) = [ I X 0 I ] [ I 0 0 − I ] [ I − X 0 I ] , {\displaystyle \operatorname {csgn} \left({\begin{bmatrix}A&-C\\0&-B\end{bmatrix}}\right)={\begin{bmatrix}I&X\\0&I\end{bmatrix}}{\begin{bmatrix}I&0\\0&-I\end{bmatrix}}{\begin{bmatrix}I&-X\\0&I\end{bmatrix}},}
due to the stability of A {\displaystyle A} and B {\displaystyle B} .
The theorem is, naturally, also applicable to the Lyapunov equation . However, due to the structure the Newton iteration simplifies to only involving inverses of A {\displaystyle A} and A T {\displaystyle A^{T}} .
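An illustrative sketch of this construction (not part of the article): build the block matrix, compute its sign with a small Newton-iteration helper, read off X from the (1,2) block, and compare against SciPy's direct Sylvester solver. The stable test matrices and the helper matrix_sign are arbitrary choices made here for demonstration.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def matrix_sign(M, iters=60):
    Z = M.astype(float).copy()
    for _ in range(iters):
        Z = 0.5 * (Z + np.linalg.inv(Z))    # Newton iteration for the matrix sign function
    return Z

rng = np.random.default_rng(0)
n = 3
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable: eigenvalues in the left half-plane
B = -1.5 * np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable
C = rng.standard_normal((n, n))

M = np.block([[A, -C],
              [np.zeros((n, n)), -B]])
S = matrix_sign(M)
X = 0.5 * S[:n, n:]                        # the (1,2) block of sign(M) equals 2X

print(np.allclose(A @ X + X @ B, C))                 # True: X solves AX + XB = C
print(np.allclose(X, solve_sylvester(A, B, C)))      # agrees with SciPy's direct solver
```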
There is a similar result applicable to the algebraic Riccati equation , A H P + P A − P F P + Q = 0 {\displaystyle A^{H}P+PA-PFP+Q=0} . [ 1 ] [ 2 ] Define V , W ∈ C 2 n × n {\displaystyle V,W\in \mathbb {C} ^{2n\times n}} as
[ V W ] = csgn ( [ A H Q F − A ] ) − [ I 0 0 I ] . {\displaystyle {\begin{bmatrix}V&W\end{bmatrix}}=\operatorname {csgn} \left({\begin{bmatrix}A^{H}&Q\\F&-A\end{bmatrix}}\right)-{\begin{bmatrix}I&0\\0&I\end{bmatrix}}.}
Under the assumption that F , Q ∈ C n × n {\displaystyle F,Q\in \mathbb {C} ^{n\times n}} are Hermitian and there exists a unique stabilizing solution, in the sense that A − F P {\displaystyle A-FP} is stable , that solution is given by the over-determined , but consistent , linear system
V P = − W . {\displaystyle VP=-W.}
Proof sketch: The similarity transform
[ A H Q F − A ] = [ P − I I 0 ] [ − ( A − F P ) − F 0 ( A − F P ) ] [ P − I I 0 ] − 1 , {\displaystyle {\begin{bmatrix}A^{H}&Q\\F&-A\end{bmatrix}}={\begin{bmatrix}P&-I\\I&0\end{bmatrix}}{\begin{bmatrix}-(A-FP)&-F\\0&(A-FP)\end{bmatrix}}{\begin{bmatrix}P&-I\\I&0\end{bmatrix}}^{-1},}
and the stability of A − F P {\displaystyle A-FP} implies that
( csgn ( [ A H Q F − A ] ) − [ I 0 0 I ] ) [ X − I I 0 ] = [ X − I I 0 ] [ 0 Y 0 − 2 I ] , {\displaystyle \left(\operatorname {csgn} \left({\begin{bmatrix}A^{H}&Q\\F&-A\end{bmatrix}}\right)-{\begin{bmatrix}I&0\\0&I\end{bmatrix}}\right){\begin{bmatrix}X&-I\\I&0\end{bmatrix}}={\begin{bmatrix}X&-I\\I&0\end{bmatrix}}{\begin{bmatrix}0&Y\\0&-2I\end{bmatrix}},}
for some matrix Y ∈ C n × n {\displaystyle Y\in \mathbb {C} ^{n\times n}} .
The Denman–Beavers iteration for the square root of a matrix can be derived from the Newton iteration for the matrix sign function by noticing that A − P I P = 0 {\displaystyle A-PIP=0} is a degenerate algebraic Riccati equation [ 3 ] and by definition a solution P {\displaystyle P} is the square root of A {\displaystyle A} . | https://en.wikipedia.org/wiki/Matrix_sign_function |
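A minimal sketch of the Denman–Beavers iteration for the principal matrix square root (illustrative only; the symmetric positive-definite test matrix is an arbitrary example):

```python
import numpy as np

def sqrtm_denman_beavers(A, iters=50):
    """Denman-Beavers iteration: Y converges to sqrt(A), Z to inv(sqrt(A))."""
    Y, Z = A.astype(float).copy(), np.eye(A.shape[0])
    for _ in range(iters):
        # tuple assignment: the right-hand side uses the previous Y and Z
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
    return Y

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
R = sqrtm_denman_beavers(A)
print(np.allclose(R @ R, A))   # True
```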
Matrixx Initiatives, Inc. v. Siracusano , 563 U.S. 27 (2011), is a decision by the Supreme Court of the United States regarding whether a plaintiff can state a claim for securities fraud under §10(b) of the Securities Exchange Act of 1934 , as amended, 15 U.S.C. §78j(b) , and Securities and Exchange Commission Rule 10b-5, 17 CFR §240.10b-5 (2010), based on a pharmaceutical company's failure to disclose reports of adverse events associated with a product if the reports do not find statistically significant evidence that the adverse effects may be caused by the use of the product. In a 9–0 opinion delivered by Justice Sonia Sotomayor , the Court affirmed the Court of Appeals for the Ninth Circuit 's ruling that the respondents, plaintiffs in a securities fraud class action against Matrixx Initiatives, Inc., and three Matrixx executives, had stated a claim under §10(b) and Rule 10b-5. [ 1 ]
Petitioner Matrixx Initiatives, Inc., is a pharmaceutical company that sells cold remedy products through its wholly owned subsidiary Zicam, LLC. One of Zicam's main products is Zicam Cold Remedy (Zicam), which is produced in the form of a nasal spray or gel containing the active ingredient zinc gluconate . On April 27, 2004, respondents brought a class action suit against petitioners, alleging that petitioners violated §10(b) of the Securities Exchange Act and SEC Rule 10b-5 by failing to disclose reports that Zicam could cause anosmia , or loss of the sense of smell. Petitioners filed a motion to dismiss respondents' complaint for failure to state a claim. The District Court for the District of Arizona granted the motion without prejudice , [ 2 ] reasoning that the alleged user complaints were neither material nor statistically significant, and that respondents failed to allege scienter . Respondents appealed to the Court of Appeals for the Ninth Circuit, which issued a decision on October 28, 2009, to reverse and remand the judgment of the District Court. [ 3 ] On March 23, 2010, petitioners filed their petition for a writ of certiorari to the Ninth Circuit with the United States Supreme Court. [ 4 ]
On March 22, 2011, Justice Sotomayor delivered the 9–0 opinion that held "[r]espondents have stated a claim under §10(b) and Rule 10b-5", affirming 585 F.3d 1167. [ 1 ]
An article by Carl Bialik appearing in The Wall Street Journal on April 2, 2011, reported: [ 5 ]
In [the] opinion, the justices said companies can't only rely on statistical significance when deciding what they need to disclose to investors.
Amen, say several statisticians who have long argued that the concept of statistical significance has unjustly overtaken other barometers used to determine which experimental results are valid and warrant public distribution. "Statistical significance doesn't tell you everything about the truth of the hypothesis you're exploring," says Steven Goodman, an epidemiologist and biostatistician at the Johns Hopkins Bloomberg School of Public Health .
Erik Olson, a partner at the Morrison & Foerster law firm in San Francisco which filed an amicus brief on behalf of BayBio, said that the court's ruling risks leaving companies without a clear guideline for deciding when they need to disclose adverse events. [ 5 ] Olson, Stephen Thau, and Stefan Szpajda wrote a press release stating: [ 6 ]
Life sciences companies and other public companies can learn at least two lessons from the decision. First and foremost, be careful what you say. As the Court emphasized, the securities laws focus on false or misleading speech. "[C]ompanies can control what they have to disclose under these provisions by controlling what they say to the market." (Slip Op. at 16). Rash or categorical comments are far more likely to form the basis for a lawsuit than measured, careful statements about the facts.
Second, life sciences companies should consult carefully with lawyers regarding specific disclosures and policies and practices for disclosing adverse events. | https://en.wikipedia.org/wiki/Matrixx_Initiatives,_Inc._v._Siracusano |
Matthew Nagle (October 16, 1979 – July 24, 2007) was the first person to use a brain–computer interface to restore functionality lost due to paralysis . He was a C3 tetraplegic , paralyzed from the neck down after being stabbed.
Nagle attended Weymouth High School (Class of 1998). He was an exceptional athlete and a star football player. In 2001, he sustained a stabbing injury while leaving the town's annual fireworks show near Wessagussett Beach on July 3. He was stabbed and his spinal cord severed when he stepped in to help a friend.
Nagle died on July 24, 2007, in Stoughton, Massachusetts , from sepsis . [ 1 ] [ 2 ]
Nagle agreed to participate in a clinical trial involving the BrainGate Neural Interface System (developed by Cyberkinetics ) out of a desire to be healthy again and lead a normal life, and in hopes that modern medical discoveries could help him. He also hoped that his participation in this clinical trial would help improve the lives of people who, like him, suffered injuries or diseases that cause severe motor disabilities.
The device was implanted on June 22, 2004, by neurosurgeon Gerhard Friehs . A 96- electrode " Utah Array " was placed on the surface of his brain over the region of motor cortex that controlled his dominant left hand and arm. A link connected it to the outside of his skull, where it could be connected to a computer. The computer was then trained to recognize Nagle's thought patterns and associate them with movements he was trying to achieve. [ 3 ]
While the device was implanted, Nagle could control a computer "mouse" cursor, using it to press on-screen buttons to control a TV, check e-mail, and do essentially anything that can be done by pressing buttons. He could also draw on the screen, although the cursor control was not precise, and send open and close commands to an external prosthetic hand. [ 4 ] The results of the study are published in the journal Nature . [ 5 ] Per Food and Drug Administration (FDA) regulations and the study protocol, the BrainGate device was removed from him after approximately one year.
I can't put it into words. It's just—I use my brain. I just thought it. I said, "Cursor go up to the top right." And it did, and now I can control it all over the screen. It will give me a sense of independence.
On June 5, 2008, a grand jury in Norfolk County, Massachusetts , indicted on a second-degree murder charge Nagle's attacker Nicholas Cirignano. Cirignano had in 2005 been convicted of Nagle's stabbing and sentenced to nine years' imprisonment. District Attorney William Keating used the state medical examiner's ruling that the stabbing had caused Nagle's eventual death as grounds to seek the murder charge.
On April 10, 2009, a Superior Court Judge ruled that Cirignano could not be tried for murder, as the jury's verdict from the original assault case had already determined that one of the key components to a murder charge, malice, was negated by excessive force in self-defence. However, the lesser charge of manslaughter could still, in theory, be applied. [ 7 ] | https://en.wikipedia.org/wiki/Matt_Nagle |
The Mattauch isobar rule , formulated by Josef Mattauch in 1934, states that if two adjacent elements on the periodic table have isotopes of the same mass number , one of the isotopes must be radioactive . [ 1 ] [ 2 ] Two nuclides that have the same mass number ( isobars ) can both be stable only if their atomic numbers differ by more than one. In fact, for currently observationally stable nuclides, the difference can only be 2 or 4, and in theory, two nuclides that have the same mass number cannot be both stable (at least to beta decay or double beta decay ), but many such nuclides which are theoretically unstable to double beta decay have not been observed to decay, e.g. 134 Xe . [ 1 ] However, this rule cannot make predictions on the half-lives of these radioisotopes . [ 1 ]
A consequence of this rule is that technetium and promethium both have no stable isotopes, as each of the neighboring elements on the periodic table ( molybdenum and ruthenium , and neodymium and samarium , respectively) have a beta-stable isotope for each mass number for the range in which the isotopes of the unstable elements usually would be stable to beta decay . (Note that although 147 Sm is unstable, it is stable to beta decay; thus 147 is not a counterexample). [ 1 ] [ 2 ] These ranges can be calculated using the liquid drop model (for example the stability of technetium isotopes ), in which the isobar with the lowest mass excess or greatest binding energy is shown to be stable to beta decay [ 3 ] because energy conservation forbids a spontaneous transition to a less stable state. [ 4 ]
Thus no stable nuclides have proton number 43 or 61, and by the same reasoning no stable nuclides have neutron number 19, 21, 35, 39, 45, 61, 71, 89, 115, or 123.
The only known exceptions to the Mattauch isobar rule are the cases of antimony-123 and tellurium-123 and of hafnium-180 and tantalum-180m , where both nuclei are observationally stable. It is predicted that 123 Te would undergo electron capture to form 123 Sb, but this decay has not yet been observed; 180m Ta should be able to undergo isomeric transition to 180 Ta, beta decay to 180 W, electron capture to 180 Hf, or alpha decay to 176 Lu, but none of these decay modes have been observed. [ 5 ]
In addition, beta decay has been seen for neither curium-247 nor berkelium-247 , though it is expected that the former should decay into the latter. Both nuclides are alpha-unstable.
As mentioned above, the Mattauch isobar rule cannot make predictions as to the half-lives of the beta-unstable isotopes. Hence there are a few cases where isobars of adjacent elements both occur primordially, as the half-life of the unstable isobar is over a billion years. This occurs for the following mass numbers: | https://en.wikipedia.org/wiki/Mattauch_isobar_rule |
Matte is a term used in the field of pyrometallurgy given to the molten metal sulfide phases typically formed during smelting of copper , nickel , and other base metals. [ 1 ] Typically, a matte is the phase in which the principal metal being extracted is recovered prior to a final reduction process (usually converting ) to produce blister copper. [ 1 ] The matte may also collect some valuable minor constituents such as noble metals , minor base metals, selenium or tellurium . Mattes may also be used to collect impurities from a metal phase, such as in the case of antimony smelting. [ 2 ] Molten mattes are insoluble in both slag and metal phases. This insolubility, combined with differences in specific gravities between mattes, slags, and metals, allows for separation of the molten phases.
| https://en.wikipedia.org/wiki/Matte_(metallurgy)
Matter is a peer-reviewed scientific journal that covers the general field of materials science . It is published by Cell Press and the editor-in-chief is Steven W. Cranford.
| https://en.wikipedia.org/wiki/Matter_(journal)
Even restricting the discussion to physics , scientists do not have a unique definition of what matter is. In the currently known particle physics , summarised by the standard model of elementary particles and interactions, it is possible to distinguish in an absolute sense particles of matter and particles of antimatter . This is particularly easy for those particles that carry electric charge , such as electrons , protons or quarks , while the distinction is more subtle in the case of neutrinos , fundamental elementary particles that do not carry electric charge. In the standard model, it is not possible to create a net amount of matter particles—or more precisely, it is not possible to change the net number of leptons or of quarks in any perturbative reaction among particles. This remark is consistent with all existing observations.
However, such processes are not considered impossible and are expected in other models of elementary particles that extend the standard model. They are necessary in speculative theories that aim to explain the cosmic excess of matter over antimatter , such as leptogenesis and baryogenesis . They could even manifest themselves in the laboratory as proton decay or as the creation of electrons in so-called neutrinoless double beta decay. The latter case occurs if the neutrinos are Majorana particles , being at the same time matter and antimatter, according to the definition given just above. [ 1 ]
In a wider sense, one can use the word matter simply to refer to fermions . In this sense, matter and antimatter particles (such as an electron and a positron ) are identified beforehand. The process inverse to particle annihilation can be called matter creation ; more precisely, we are considering here the process obtained under time reversal of the annihilation process. This process is also known as pair production , and can be described as the conversion of light particles (i.e., photons) into one or more massive particles . [ citation needed ] The most common and well-studied case is the one where two photons convert into an electron – positron pair.
Because of momentum conservation laws, the creation of a pair of fermions (matter particles) out of a single photon cannot occur. However, matter creation is allowed by these laws when in the presence of another particle (another boson, or even a fermion) which can share the primary photon's momentum. Thus, matter can be created out of two photons.
The law of conservation of energy sets a minimum photon energy required for the creation of a pair of fermions: this threshold energy must be greater than the total rest energy of the fermions created. To create an electron-positron pair, the total energy of the photons, in the rest frame, must be at least 2 m e c 2 = 2 × 0.511 MeV = 1.022 MeV ( m e is the mass of one electron and c is the speed of light in vacuum), an energy value that corresponds to soft gamma ray photons. The creation of a much more massive pair, like a proton and antiproton , requires photons with energy of more than 1.88 GeV (hard gamma ray photons).
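The quoted thresholds follow directly from the rest energies (a small illustrative calculation, not from the original text; the rest-energy values are standard CODATA figures):

```python
ELECTRON_REST_ENERGY_MEV = 0.51099895   # electron rest energy, MeV
PROTON_REST_ENERGY_MEV = 938.27209      # proton rest energy, MeV

print(f"electron-positron pair threshold: {2 * ELECTRON_REST_ENERGY_MEV:.3f} MeV")      # ~1.022 MeV
print(f"proton-antiproton pair threshold: {2 * PROTON_REST_ENERGY_MEV / 1000:.2f} GeV")  # ~1.88 GeV
```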
The first published calculations of the rate of e + –e − pair production in photon-photon collisions were done by Lev Landau in 1934. [ 2 ] It was predicted that the process of e + –e − pair creation (via collisions of photons) dominates in collision of ultra-relativistic charged particles—because those photons are radiated in narrow cones along the direction of motion of the original particle, greatly increasing photon flux.
In high-energy particle colliders , matter creation events have yielded a wide variety of exotic heavy particles precipitating out of colliding photon jets (see two-photon physics ). Currently, two-photon physics studies creation of various fermion pairs both theoretically and experimentally (using particle accelerators , air showers , radioactive isotopes , etc.).
It is possible to create all fundamental particles in the standard model , including quarks, leptons and bosons using photons of varying energies above some minimum threshold, whether directly (by pair production), or by decay of the intermediate particle (such as a W − boson decaying to form an electron and an electron-antineutrino). [ citation needed ]
As shown above, to produce ordinary baryonic matter out of a photon gas , this gas must not only have a very high photon density , but also be very hot – the energy ( temperature ) of photons must obviously exceed the rest mass energy of the given matter particle pair. The threshold temperature for production of electrons is about 10^10 K , about 10^13 K for protons and neutrons , etc. According to the Big Bang theory, in the early universe , mass-less photons and massive fermions would inter-convert freely. As the photon gas expanded and cooled, some fermions would be left over (in extremely small amounts, ~10^−10 ) because low energy photons could no longer break them apart. Those left-over fermions would have become the matter we see today in the universe around us.
Matter waves are a central part of the theory of quantum mechanics , being half of wave–particle duality . At all scales where measurements have been practical, matter exhibits wave -like behavior. For example, a beam of electrons can be diffracted just like a beam of light or a water wave.
The concept that matter behaves like a wave was proposed by French physicist Louis de Broglie ( / d ə ˈ b r ɔɪ / ) in 1924, and so matter waves are also known as de Broglie waves .
The de Broglie wavelength is the wavelength , λ , associated with a particle with momentum p through the Planck constant , h : λ = h p . {\displaystyle \lambda ={\frac {h}{p}}.}
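As a worked example (illustrative only, using SciPy's physical constants), the non-relativistic de Broglie wavelength of an electron moving at 1% of the speed of light:

```python
import scipy.constants as const

v = 0.01 * const.c                 # electron speed, 1% of c (non-relativistic)
p = const.m_e * v                  # momentum p = m v
lam = const.h / p                  # de Broglie wavelength lambda = h / p
print(f"{lam * 1e9:.3f} nm")       # ~0.242 nm, comparable to atomic spacings
```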
Wave-like behavior of matter has been experimentally demonstrated, first for electrons in 1927 and for other elementary particles , neutral atoms and molecules in the years since.
Matter waves have more complex velocity relations than solid objects and they also differ from electromagnetic waves (light). Collective matter waves are used to model phenomena in solid state physics; standing matter waves are used in molecular chemistry.
Matter wave concepts are widely used in the study of materials where different wavelength and interaction characteristics of electrons, neutrons, and atoms are leveraged for advanced microscopy and diffraction technologies.
At the end of the 19th century, light was thought to consist of waves of electromagnetic fields which propagated according to Maxwell's equations , while matter was thought to consist of localized particles (see history of wave and particle duality ). In 1900, this division was questioned when, investigating the theory of black-body radiation , Max Planck proposed that the thermal energy of oscillating atoms is divided into discrete portions, or quanta. [ 1 ] Extending Planck's investigation in several ways, including its connection with the photoelectric effect , Albert Einstein proposed in 1905 that light is also propagated and absorbed in quanta, [ 2 ] : 87 now called photons . These quanta would have an energy given by the Planck–Einstein relation : E = h ν {\displaystyle E=h\nu } and a momentum vector p {\displaystyle \mathbf {p} } | p | = p = E c = h λ , {\displaystyle \left|\mathbf {p} \right|=p={\frac {E}{c}}={\frac {h}{\lambda }},} where ν (lowercase Greek letter nu ) and λ (lowercase Greek letter lambda ) denote the frequency and wavelength of the light, c the speed of light, and h the Planck constant . [ 3 ] In the modern convention, frequency is symbolized by f as is done in the rest of this article. Einstein's postulate was verified experimentally [ 2 ] : 89 by K. T. Compton and O. W. Richardson [ 4 ] and by A. L. Hughes [ 5 ] in 1912 then more carefully including a measurement of the Planck constant in 1916 by Robert Millikan . [ 6 ]
When I conceived the first basic ideas of wave mechanics in 1923–1924, I was guided by the aim to perform a real physical synthesis, valid for all particles, of the coexistence of the wave and of the corpuscular aspects that Einstein had introduced for photons in his theory of light quanta in 1905.
De Broglie , in his 1924 PhD thesis, [ 8 ] proposed that just as light has both wave-like and particle-like properties, electrons also have wave-like properties.
His thesis started from the hypothesis, "that to each portion of energy with a proper mass m 0 one may associate a periodic phenomenon of the frequency ν 0 , such that one finds: hν 0 = m 0 c 2 . The frequency ν 0 is to be measured, of course, in the rest frame of the energy packet. This hypothesis is the basis of our theory." [ 9 ] [ 8 ] : 8 [ 10 ] [ 11 ] [ 12 ] [ 13 ] (This frequency is also known as Compton frequency .)
To find the wavelength equivalent to a moving body, de Broglie [ 2 ] : 214 set the total energy from special relativity for that body equal to hν : E = m c 2 1 − v 2 c 2 = h ν {\displaystyle E={\frac {mc^{2}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}=h\nu }
(Modern physics no longer uses this form of the total energy; the energy–momentum relation has proven more useful.) De Broglie identified the velocity of the particle, v , with the wave group velocity in free space: v g ≡ ∂ ω ∂ k = d ν d ( 1 / λ ) {\displaystyle v_{\text{g}}\equiv {\frac {\partial \omega }{\partial k}}={\frac {d\nu }{d(1/\lambda )}}}
(The modern definition of group velocity uses angular frequency ω and wave number k ). By applying the differentials to the energy equation and identifying the relativistic momentum : p = m v 1 − v 2 c 2 {\displaystyle p={\frac {mv}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}}
then integrating, de Broglie arrived at his formula for the relationship between the wavelength , λ , associated with an electron and the modulus of its momentum , p , through the Planck constant , h : [ 14 ] λ = h p . {\displaystyle \lambda ={\frac {h}{p}}.}
Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Erwin Schrödinger decided to find a proper three-dimensional wave equation for the electron. He was guided by William Rowan Hamilton 's analogy between mechanics and optics (see Hamilton's optico-mechanical analogy ), encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system – the trajectories of light rays become sharp tracks that obey Fermat's principle , an analog of the principle of least action . [ 15 ]
In 1926, Schrödinger published the wave equation that now bears his name [ 16 ] – the matter wave analogue of Maxwell's equations – and used it to derive the energy spectrum of hydrogen . Frequencies of solutions of the non-relativistic Schrödinger equation differ from de Broglie waves by the Compton frequency since the energy corresponding to the rest mass of a particle is not part of the non-relativistic Schrödinger equation. The Schrödinger equation describes the time evolution of a wavefunction , a function that assigns a complex number to each point in space. Schrödinger tried to interpret the modulus squared of the wavefunction as a charge density. This approach was, however, unsuccessful. [ 17 ] [ 18 ] [ 19 ] Max Born proposed that the modulus squared of the wavefunction is instead a probability density , a successful proposal now known as the Born rule . [ 17 ]
The following year, 1927, C. G. Darwin (grandson of the famous biologist ) explored Schrödinger's equation in several idealized scenarios. [ 20 ] For an unbound electron in free space he worked out the propagation of the wave, assuming an initial Gaussian wave packet . Darwin showed that at time t {\displaystyle t} later the position x {\displaystyle x} of the packet traveling at velocity v {\displaystyle v} would be x 0 + v t ± σ 2 + ( h t / 2 π σ m ) 2 {\displaystyle x_{0}+vt\pm {\sqrt {\sigma ^{2}+(ht/2\pi \sigma m)^{2}}}} where σ {\displaystyle \sigma } is the uncertainty in the initial position. This position uncertainty creates uncertainty in velocity (the extra second term in the square root) consistent with Heisenberg 's uncertainty relation. The wave packet therefore spreads out with time.
In 1927, matter waves were first experimentally confirmed to occur in George Paget Thomson and Alexander Reid's diffraction experiment [ 21 ] and the Davisson–Germer experiment , [ 22 ] [ 23 ] both for electrons.
The de Broglie hypothesis and the existence of matter waves have since been confirmed for other elementary particles; neutral atoms and even molecules have been shown to be wave-like. [ 24 ]
The first electron wave interference patterns directly demonstrating wave–particle duality used electron biprisms [ 25 ] [ 26 ] (essentially a wire placed in an electron microscope) and measured single electrons building up the diffraction pattern.
A close copy of the famous double-slit experiment [ 27 ] : 260 using electrons through physical apertures gave the movie shown. [ 28 ]
In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target. [ 22 ] [ 23 ] The diffracted electron intensity was measured, and was determined to have a similar angular dependence to diffraction patterns predicted by Bragg for x-rays . At the same time George Paget Thomson and Alexander Reid at the University of Aberdeen were independently firing electrons at thin celluloid foils and later metal films, observing rings which can be similarly interpreted. [ 21 ] (Alexander Reid, who was Thomson's graduate student, performed the first experiments but he died soon after in a motorcycle accident [ 29 ] and is rarely mentioned.) Before the acceptance of the de Broglie hypothesis, diffraction was a property that was thought to be exhibited only by waves. Therefore, the presence of any diffraction effects by matter demonstrated the wave-like nature of matter. [ 30 ] The matter wave interpretation was placed onto a solid foundation in 1928 by Hans Bethe , [ 31 ] who solved the Schrödinger equation , [ 16 ] showing how this could explain the experimental results. His approach is similar to what is used in modern electron diffraction approaches. [ 32 ] [ 33 ]
This was a pivotal result in the development of quantum mechanics . Just as the photoelectric effect demonstrated the particle nature of light, these experiments showed the wave nature of matter.
Neutrons , produced in nuclear reactors with kinetic energy of around 1 MeV , thermalize to around 0.025 eV as they scatter from light atoms. The resulting de Broglie wavelength (around 180 pm ) matches interatomic spacing and neutrons scatter strongly from hydrogen atoms. Consequently, neutron matter waves are used in crystallography , especially for biological materials. [ 34 ] Neutrons were discovered in the early 1930s, and their diffraction was observed in 1936. [ 35 ] In 1944, Ernest O. Wollan , with a background in X-ray scattering from his PhD work [ 36 ] under Arthur Compton , recognized the potential for applying thermal neutrons from the newly operational X-10 nuclear reactor to crystallography . Joined by Clifford G. Shull , they developed [ 37 ] neutron diffraction throughout the 1940s.
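The quoted wavelength can be reproduced from λ = h / √(2 m E) (a small illustrative calculation using SciPy's constants; 0.025 eV is the thermal kinetic energy mentioned above):

```python
import scipy.constants as const

E = 0.025 * const.eV                     # thermal neutron kinetic energy, joules
p = (2.0 * const.m_n * E) ** 0.5         # non-relativistic momentum
lam = const.h / p                        # de Broglie wavelength
print(f"{lam * 1e12:.0f} pm")            # ~181 pm, on the order of interatomic spacings
```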
In the 1970s, a neutron interferometer demonstrated the action of gravity in relation to wave–particle duality. [ 38 ] The double-slit experiment was performed using neutrons in 1988. [ 39 ]
Interference of atom matter waves was first observed by Immanuel Estermann and Otto Stern in 1930, when a Na beam was diffracted off a surface of NaCl. [ 40 ] The short de Broglie wavelength of atoms prevented progress for many years until two technological breakthroughs revived interest: microlithography allowing precise small devices and laser cooling allowing atoms to be slowed, increasing their de Broglie wavelength. [ 41 ] The double-slit experiment on atoms was performed in 1991. [ 42 ]
Advances in laser cooling allowed cooling of neutral atoms down to nanokelvin temperatures. At these temperatures, the de Broglie wavelengths come into the micrometre range. Using Bragg diffraction of atoms and a Ramsey interferometry technique, the de Broglie wavelength of cold sodium atoms was explicitly measured and found to be consistent with the temperature measured by a different method. [ 43 ]
Recent experiments confirm the relations for molecules and even macromolecules that otherwise might be supposed too large to undergo quantum mechanical effects. In 1999, a research team in Vienna demonstrated diffraction for molecules as large as fullerenes . [ 44 ] The researchers calculated a de Broglie wavelength of the most probable C 60 velocity as 2.5 pm .
More recent experiments have demonstrated the quantum nature of molecules made of 810 atoms with a mass of 10,123 Da . [ 45 ] As of 2019, this has been pushed to molecules of 25,000 Da . [ 46 ]
In these experiments the build-up of such interference patterns could be recorded in real time and with single molecule sensitivity. [ 47 ] Large molecules are already so complex that they give experimental access to some aspects of the quantum-classical interface, i.e., to certain decoherence mechanisms. [ 48 ] [ 49 ]
Matter-wave behavior has also been detected in van der Waals molecules , [ 50 ] rho mesons , [ 51 ] [ 52 ] and Bose–Einstein condensates . [ 53 ]
Waves have more complicated concepts for velocity than solid objects.
The simplest approach is to focus on the description in terms of plane matter waves for a free particle , that is a wave function described by ψ ( r ) = e i k ⋅ r − i ω t , {\displaystyle \psi (\mathbf {r} )=e^{i\mathbf {k} \cdot \mathbf {r} -i\omega t},} where r {\displaystyle \mathbf {r} } is a position in real space, k {\displaystyle \mathbf {k} } is the wave vector in units of inverse meters, ω is the angular frequency with units of inverse time and t {\displaystyle t} is time. (Here the physics definition for the wave vector is used, which is 2 π {\displaystyle 2\pi } times the wave vector used in crystallography , see wavevector .) The de Broglie equations relate the wavelength λ to the modulus of the momentum | p | = p {\displaystyle |\mathbf {p} |=p} , and frequency f to the total energy E of a free particle as written above: [ 54 ] λ = 2 π | k | = h p f = ω 2 π = E h {\displaystyle {\begin{aligned}&\lambda ={\frac {2\pi }{|\mathbf {k} |}}={\frac {h}{p}}\\&f={\frac {\omega }{2\pi }}={\frac {E}{h}}\end{aligned}}} where h is the Planck constant . The equations can also be written as p = ℏ k E = ℏ ω , {\displaystyle {\begin{aligned}&\mathbf {p} =\hbar \mathbf {k} \\&E=\hbar \omega ,\\\end{aligned}}} Here, ħ = h /2 π is the reduced Planck constant. The second equation is also referred to as the Planck–Einstein relation .
In the de Broglie hypothesis, the velocity of a particle equals the group velocity of the matter wave. [ 2 ] : 214 In isotropic media or a vacuum the group velocity of a wave is defined by: v g = ∂ ω ( k ) ∂ k {\displaystyle \mathbf {v_{g}} ={\frac {\partial \omega (\mathbf {k} )}{\partial \mathbf {k} }}} The relationship between the angular frequency and wavevector is called the dispersion relationship . For the non-relativistic case this is: ω ( k ) ≈ m 0 c 2 ℏ + ℏ k 2 2 m 0 . {\displaystyle \omega (\mathbf {k} )\approx {\frac {m_{0}c^{2}}{\hbar }}+{\frac {\hbar k^{2}}{2m_{0}}}\,.} where m 0 {\displaystyle m_{0}} is the rest mass. Applying the derivative gives the (non-relativistic) matter wave group velocity : v g = ℏ k m 0 {\displaystyle \mathbf {v_{g}} ={\frac {\hbar \mathbf {k} }{m_{0}}}} For comparison, the group velocity of light, with a dispersion ω ( k ) = c k {\displaystyle \omega (k)=ck} , is the speed of light c {\displaystyle c} .
As an alternative, using the relativistic dispersion relationship for matter waves ω ( k ) = k 2 c 2 + ( m 0 c 2 ℏ ) 2 . {\displaystyle \omega (\mathbf {k} )={\sqrt {k^{2}c^{2}+\left({\frac {m_{0}c^{2}}{\hbar }}\right)^{2}}}\,.} then v g = k c 2 ω {\displaystyle \mathbf {v_{g}} ={\frac {\mathbf {k} c^{2}}{\omega }}} This relativistic form relates to the phase velocity as discussed below.
For non-isotropic media we use the Energy–momentum form instead: v g = ∂ ω ∂ k = ∂ ( E / ℏ ) ∂ ( p / ℏ ) = ∂ E ∂ p = ∂ ∂ p ( p 2 c 2 + m 0 2 c 4 ) = p c 2 p 2 c 2 + m 0 2 c 4 = p c 2 E . {\displaystyle {\begin{aligned}\mathbf {v} _{\mathrm {g} }&={\frac {\partial \omega }{\partial \mathbf {k} }}={\frac {\partial (E/\hbar )}{\partial (\mathbf {p} /\hbar )}}={\frac {\partial E}{\partial \mathbf {p} }}={\frac {\partial }{\partial \mathbf {p} }}\left({\sqrt {p^{2}c^{2}+m_{0}^{2}c^{4}}}\right)\\&={\frac {\mathbf {p} c^{2}}{\sqrt {p^{2}c^{2}+m_{0}^{2}c^{4}}}}\\&={\frac {\mathbf {p} c^{2}}{E}}.\end{aligned}}}
But (see below), since the phase velocity is v p = E / p = c 2 / v {\displaystyle \mathbf {v} _{\mathrm {p} }=E/\mathbf {p} =c^{2}/\mathbf {v} } , then v g = p c 2 E = c 2 v p = v , {\displaystyle {\begin{aligned}\mathbf {v} _{\mathrm {g} }&={\frac {\mathbf {p} c^{2}}{E}}\\&={\frac {c^{2}}{\mathbf {v} _{\mathrm {p} }}}\\&=\mathbf {v} ,\end{aligned}}} where v {\displaystyle \mathbf {v} } is the velocity of the center of mass of the particle, identical to the group velocity.
The phase velocity in isotropic media is defined as: v p = ω k {\displaystyle \mathbf {v_{p}} ={\frac {\omega }{\mathbf {k} }}} Using the relativistic group velocity above: [ 2 ] : 215 v p = c 2 v g {\displaystyle \mathbf {v_{p}} ={\frac {c^{2}}{\mathbf {v_{g}} }}} This shows that v p ⋅ v g = c 2 {\displaystyle \mathbf {v_{p}} \cdot \mathbf {v_{g}} =c^{2}} as reported by R.W. Ditchburn in 1948 and J. L. Synge in 1952. Electromagnetic waves also obey v p ⋅ v g = c 2 {\displaystyle \mathbf {v_{p}} \cdot \mathbf {v_{g}} =c^{2}} , as both | v p | = c {\displaystyle |\mathbf {v_{p}} |=c} and | v g | = c {\displaystyle |\mathbf {v_{g}} |=c} . Since for matter waves, | v g | < c {\displaystyle |\mathbf {v_{g}} |<c} , it follows that | v p | > c {\displaystyle |\mathbf {v_{p}} |>c} , but only the group velocity carries information. The superluminal phase velocity therefore does not violate special relativity, as it does not carry information.
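This identity can be verified symbolically from the relativistic dispersion relation given above (an illustrative check with SymPy, not part of the original text):

```python
import sympy as sp

k, c, m0, hbar = sp.symbols('k c m_0 hbar', positive=True)

omega = sp.sqrt(k**2 * c**2 + (m0 * c**2 / hbar)**2)   # relativistic matter-wave dispersion

v_g = sp.diff(omega, k)        # group velocity d(omega)/dk
v_p = omega / k                # phase velocity omega/k
print(sp.simplify(v_g * v_p))  # prints c**2
```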
For non-isotropic media, then v p = ω k = E / ℏ p / ℏ = E p . {\displaystyle \mathbf {v} _{\mathrm {p} }={\frac {\omega }{\mathbf {k} }}={\frac {E/\hbar }{\mathbf {p} /\hbar }}={\frac {E}{\mathbf {p} }}.}
Using the relativistic relations for energy and momentum yields v p = E p = m c 2 m v = γ m 0 c 2 γ m 0 v = c 2 v . {\displaystyle \mathbf {v} _{\mathrm {p} }={\frac {E}{\mathbf {p} }}={\frac {mc^{2}}{m\mathbf {v} }}={\frac {\gamma m_{0}c^{2}}{\gamma m_{0}\mathbf {v} }}={\frac {c^{2}}{\mathbf {v} }}.} The variable v {\displaystyle \mathbf {v} } can either be interpreted as the speed of the particle or the group velocity of the corresponding matter wave—the two are the same. Since the particle speed | v | < c {\displaystyle |\mathbf {v} |<c} for any particle that has nonzero mass (according to special relativity ), the phase velocity of matter waves always exceeds c , i.e., | v p | > c , {\displaystyle |\mathbf {v} _{\mathrm {p} }|>c,} which approaches c when the particle speed is relativistic. The superluminal phase velocity does not violate special relativity, similar to the case above for non-isotropic media. See the article on Dispersion (optics) for further details.
Using two formulas from special relativity , one for the relativistic mass energy and one for the relativistic momentum E = m c 2 = γ m 0 c 2 p = m v = γ m 0 v {\displaystyle {\begin{aligned}E&=mc^{2}=\gamma m_{0}c^{2}\\[1ex]\mathbf {p} &=m\mathbf {v} =\gamma m_{0}\mathbf {v} \end{aligned}}} allows the equations for de Broglie wavelength and frequency to be written as λ = h γ m 0 v = h m 0 v 1 − v 2 c 2 f = γ m 0 c 2 h = m 0 c 2 h 1 − v 2 c 2 , {\displaystyle {\begin{aligned}&\lambda =\,\,{\frac {h}{\gamma m_{0}v}}\,=\,{\frac {h}{m_{0}v}}\,\,\,{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}\\[2.38ex]&f={\frac {\gamma \,m_{0}c^{2}}{h}}={\frac {m_{0}c^{2}}{h{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}},\end{aligned}}} where v = | v | {\displaystyle v=|\mathbf {v} |} is the velocity , γ {\displaystyle \gamma } the Lorentz factor , and c {\displaystyle c} the speed of light in vacuum. [ 55 ] [ 56 ] This shows that as the velocity of a particle approaches zero (rest) the de Broglie wavelength approaches infinity.
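As a concrete illustration of these relativistic de Broglie relations, the short Python sketch below evaluates λ and f for an electron; the chosen speed of 0.6c is an arbitrary example value and the constants are hard-coded approximations. It also checks that the product λf equals the phase velocity c²/v discussed above.

    import math

    c = 2.99792458e8        # speed of light, m/s
    h = 6.62607015e-34      # Planck constant, J*s
    m0 = 9.1093837e-31      # electron rest mass, kg

    v = 0.6 * c                                    # chosen particle speed (example value)
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)    # Lorentz factor

    lam = h / (gamma * m0 * v)       # de Broglie wavelength, lambda = h / (gamma m0 v)
    f = gamma * m0 * c**2 / h        # matter-wave frequency, f = gamma m0 c^2 / h

    print(f"lambda = {lam:.3e} m, f = {f:.3e} Hz")
    print(f"lambda*f = {lam * f:.3e} m/s, c^2/v = {c**2 / v:.3e} m/s")   # equal: the phase velocity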
Using four-vectors, the de Broglie relations form a single equation: P = ℏ K , {\displaystyle \mathbf {P} =\hbar \mathbf {K} ,} which is frame -independent.
Likewise, the relation between group/particle velocity and phase velocity is given in frame-independent form by: K = ( ω 0 c 2 ) U , {\displaystyle \mathbf {K} =\left({\frac {\omega _{0}}{c^{2}}}\right)\mathbf {U} ,} where K is the wave four-vector, U is the four-velocity of the particle, and ω 0 = m 0 c 2 / ℏ is the rest angular frequency.
The preceding sections refer specifically to free particles for which the wavefunctions are plane waves. There are significant numbers of other matter waves, which can be broadly split into three classes: single-particle matter waves, collective matter waves and standing waves.
The more general description of matter waves corresponding to a single particle type (e.g. a single electron or neutron only) would have a form similar to ψ ( r ) = u ( r , k ) exp ( i k ⋅ r − i E ( k ) t / ℏ ) {\displaystyle \psi (\mathbf {r} )=u(\mathbf {r} ,\mathbf {k} )\exp(i\mathbf {k} \cdot \mathbf {r} -iE(\mathbf {k} )t/\hbar )} where now there is an additional spatial term u ( r , k ) {\displaystyle u(\mathbf {r} ,\mathbf {k} )} in the front, and the energy has been written more generally as a function of the wave vector. The various terms given before still apply, although the energy is no longer always proportional to the wave vector squared. A common approach is to define an effective mass which in general is a tensor m i j ∗ {\displaystyle m_{ij}^{*}} given by m i j ∗ − 1 = 1 ℏ 2 ∂ 2 E ∂ k i ∂ k j {\displaystyle {m_{ij}^{*}}^{-1}={\frac {1}{\hbar ^{2}}}{\frac {\partial ^{2}E}{\partial k_{i}\partial k_{j}}}} so that in the simple case where all directions are the same the form is similar to that of a free wave above. E ( k ) = ℏ 2 k 2 2 m ∗ {\displaystyle E(\mathbf {k} )={\frac {\hbar ^{2}\mathbf {k} ^{2}}{2m^{*}}}} In general the group velocity would be replaced by the probability current [ 57 ] j ( r ) = ℏ 2 m i ( ψ ∗ ( r ) ∇ ψ ( r ) − ψ ( r ) ∇ ψ ∗ ( r ) ) {\displaystyle \mathbf {j} (\mathbf {r} )={\frac {\hbar }{2mi}}\left(\psi ^{*}(\mathbf {r} )\mathbf {\nabla } \psi (\mathbf {r} )-\psi (\mathbf {r} )\mathbf {\nabla } \psi ^{*}(\mathbf {r} )\right)} where ∇ {\displaystyle \nabla } is the del or gradient operator . The momentum would then be described using the kinetic momentum operator , [ 57 ] p = − i ℏ ∇ {\displaystyle \mathbf {p} =-i\hbar \nabla } The wavelength is still described as the inverse of the modulus of the wavevector, although measurement is more complex. There are many cases where this approach is used to describe single-particle matter waves:
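Such cases include, for example, electrons moving through the periodic potential of a crystal, where the effective mass formalism above is routinely applied. As a minimal numerical illustration of the probability-current expression, the following Python sketch (one dimension, free-electron mass and an arbitrary wavevector as example values) evaluates j for a plane wave on a grid and compares it with the group velocity ℏk/m.

    import numpy as np

    hbar = 1.054571817e-34     # reduced Planck constant, J*s
    m = 9.1093837e-31          # electron mass, kg (example particle)
    k = 5.0e9                  # wavevector, 1/m (arbitrary example value)

    x = np.linspace(0.0, 1e-8, 2001)
    dx = x[1] - x[0]
    psi = np.exp(1j * k * x)                    # plane-wave state, |psi|^2 = 1

    dpsi = np.gradient(psi, dx)                 # numerical spatial derivative
    j = (hbar / (2j * m)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))

    print(f"{np.real(j[1000]):.4e}")   # probability current at an interior grid point
    print(f"{hbar * k / m:.4e}")       # group velocity hbar*k/m, which it should match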
Other classes of matter waves involve more than one particle, so are called collective waves and are often quasiparticles . Many of these occur in solids – see Ashcroft and Mermin . Examples include phonons , plasmons , magnons , excitons and Cooper pairs .
The third class are matter waves which have a wavevector, a wavelength and vary with time, but have a zero group velocity or probability flux . The simplest of these, similar to the notation above would be cos ( k ⋅ r − ω t ) {\displaystyle \cos(\mathbf {k} \cdot \mathbf {r} -\omega t)} These occur as part of the particle in a box , and other cases such as in a ring . This can, and arguably should be, extended to many other cases. For instance, in early work de Broglie used the concept that an electron matter wave must be continuous in a ring to connect to the Bohr–Sommerfeld condition in the early approaches to quantum mechanics. [ 61 ] In that sense atomic orbitals around atoms, and also molecular orbitals are electron matter waves. [ 62 ] [ 63 ] [ 64 ]
Schrödinger applied Hamilton's optico-mechanical analogy to develop his wave mechanics for subatomic particles. [ 65 ] : xi Consequently, wave solutions to the Schrödinger equation share many properties with results of light wave optics . In particular, Kirchhoff's diffraction formula works well for electron optics [ 27 ] : 745 and for atomic optics . [ 66 ] The approximation works well as long as the electric fields change slowly on the scale of the de Broglie wavelength. Macroscopic apparatus fulfill this condition; slow electrons moving in solids do not.
Beyond the equations of motion, other aspects of matter wave optics differ from the corresponding light optics cases.
Sensitivity of matter waves to environmental conditions. Many examples of electromagnetic (light) diffraction occur in air under many environmental conditions, because visible light interacts only weakly with air molecules. By contrast, strongly interacting particles like slow electrons and molecules require vacuum: the matter wave properties rapidly fade when they are exposed to even low pressures of gas. [ 67 ] With special apparatus, high velocity electrons can be used to study liquids and gases . Neutrons, an important exception, interact primarily by collisions with nuclei, and thus travel several hundred feet in air. [ 68 ]
Dispersion. Light waves of all frequencies travel at the same speed of light while matter wave velocity varies strongly with frequency. The relationship between frequency (proportional to energy) and wavenumber or velocity (proportional to momentum) is called a dispersion relation . Light waves in a vacuum have a linear dispersion relation between frequency and wavenumber: ω = c k {\displaystyle \omega =ck} . For matter waves the relation is non-linear: ω ( k ) ≈ m 0 c 2 ℏ + ℏ k 2 2 m 0 . {\displaystyle \omega (k)\approx {\frac {m_{0}c^{2}}{\hbar }}+{\frac {\hbar k^{2}}{2m_{0}}}\,.} This non-relativistic matter wave dispersion relation says the frequency in vacuum varies with wavenumber ( k = 2 π / λ {\displaystyle k=2\pi /\lambda } ) in two parts: a constant part due to the de Broglie frequency of the rest mass ( ℏ ω 0 = m 0 c 2 {\displaystyle \hbar \omega _{0}=m_{0}c^{2}} ) and a quadratic part due to kinetic energy. The quadratic term causes rapid spreading of wave packets of matter waves .
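The spreading implied by the quadratic term can be seen in a minimal Python sketch that propagates a free Gaussian wave packet with the non-relativistic dispersion above (the constant rest-frequency term only adds an overall phase and is dropped); the electron mass, initial width and times are illustrative example values.

    import numpy as np

    hbar = 1.054571817e-34      # reduced Planck constant, J*s
    m = 9.1093837e-31           # electron mass, kg (example particle)

    x = np.linspace(-2e-7, 2e-7, 4096)
    dx = x[1] - x[0]
    sigma0 = 5e-9                                # initial packet width, m (example value)
    psi0 = np.exp(-x**2 / (2 * sigma0**2))       # Gaussian envelope at t = 0

    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
    omega = hbar * k**2 / (2 * m)                # quadratic (kinetic) part of the dispersion

    def rms_width(psi):
        p = np.abs(psi)**2
        p = p / (p.sum() * dx)                   # normalized probability density
        mean = (x * p).sum() * dx
        return np.sqrt((((x - mean)**2) * p).sum() * dx)

    for t in (0.0, 1e-12, 2e-12):                # times in seconds
        psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * omega * t))
        print(f"t = {t:.0e} s   rms width = {rms_width(psi_t):.2e} m")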
Coherence The visibility of diffraction features using an optical theory approach depends on the beam coherence , [ 27 ] which at the quantum level is equivalent to a density matrix approach. [ 69 ] [ 70 ] As with light, transverse coherence (across the direction of propagation) can be increased by collimation . Electron optical systems use stabilized high voltage to give a narrow energy spread in combination with collimating (parallelizing) lenses and pointed filament sources to achieve good coherence. [ 71 ] Because light at all frequencies travels at the same velocity, longitudinal and temporal coherence are linked; in matter waves these are independent. For example, for atoms, velocity (energy) selection controls longitudinal coherence and pulsing or chopping controls temporal coherence. [ 66 ] : 154
Optically shaped matter waves Optical manipulation of matter plays a critical role in matter wave optics: "Light waves can act as refractive, reflective, and absorptive structures for matter waves, just as glass interacts with light waves." [ 72 ] Laser light momentum transfer can cool matter particles and alter the internal excitation state of atoms. [ 73 ]
Multi-particle experiments While single-particle free-space optical and matter wave equations are identical, multiparticle systems like coincidence experiments are not. [ 74 ]
The following subsections provide links to pages describing applications of matter waves as probes of materials or of fundamental quantum properties . In most cases these involve some method of producing travelling matter waves which initially have the simple form exp ( i k ⋅ r − i ω t ) {\displaystyle \exp(i\mathbf {k} \cdot \mathbf {r} -i\omega t)} , then using these to probe materials.
As shown in the table below, matter wave mass ranges over 6 orders of magnitude and energy over 9 orders but the wavelengths are all in the picometre range, comparable to atomic spacings. ( Atomic diameters range from 62 to 520 pm, and the typical length of a carbon–carbon single bond is 154 pm.) Reaching longer wavelengths requires special techniques like laser cooling to reach lower energies; shorter wavelengths make diffraction effects more difficult to discern. [ 41 ] Therefore, many applications focus on material structures, in parallel with applications of electromagnetic waves, especially X-rays . Unlike light, matter wave particles may have mass , electric charge , magnetic moments , and internal structure, presenting new challenges and opportunities.
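To make the wavelength scale concrete, the short Python sketch below applies the non-relativistic de Broglie relation λ = h/√(2mE) to a few representative particles; the masses and kinetic energies are illustrative example values only.

    import math

    h = 6.62607015e-34          # Planck constant, J*s
    eV = 1.602176634e-19        # joules per electronvolt

    # (name, mass in kg, kinetic energy in eV) -- representative example values
    particles = [
        ("electron, 100 eV",    9.109e-31, 100.0),
        ("neutron, 25 meV",     1.675e-27, 0.025),
        ("helium atom, 65 meV", 6.646e-27, 0.065),
    ]

    for name, m, E_eV in particles:
        E = E_eV * eV
        lam = h / math.sqrt(2.0 * m * E)    # non-relativistic de Broglie wavelength
        print(f"{name:22s}  lambda = {lam * 1e12:6.1f} pm")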
Electron diffraction patterns emerge when energetic electrons reflect or penetrate ordered solids; analysis of the patterns leads to models of the atomic arrangement in the solids.
They are used for imaging from the micron to atomic scale using electron microscopes , in transmission , using scanning , and for surfaces at low energies .
Measurements of the energy they lose in electron energy loss spectroscopy provide information about the chemistry and electronic structure of materials. Beams of electrons also lead to characteristic X-rays in energy dispersive spectroscopy , which can produce information about chemical content at the nanoscale.
Quantum tunneling explains how electrons escape from metals in an electrostatic field at energies less than classical predictions allow: the matter wave penetrates the work function barrier of the metal.
The scanning tunneling microscope leverages quantum tunneling to image the top atomic layer of solid surfaces.
Electron holography , the electron matter wave analog of optical holography , probes the electric and magnetic fields in thin films.
Neutron diffraction complements x-ray diffraction through the different scattering cross sections and sensitivity to magnetism.
Small-angle neutron scattering provides a way to obtain the structure of disordered systems, with sensitivity to light elements, isotopes and magnetic moments.
Neutron reflectometry is a neutron diffraction technique for measuring the structure of thin films.
Atom interferometers , similar to optical interferometers , measure the difference in phase between atomic matter waves along different paths.
Atom optics mimics many light-optics devices, including mirrors and atom-focusing zone plates .
Scanning helium microscopy uses He atom waves to image solid structures non-destructively.
Quantum reflection uses matter wave behavior to explain grazing angle atomic reflection, the basis of some atomic mirrors .
Quantum decoherence measurements rely on Rb atom wave interference.
Quantum superposition revealed by interference of matter waves from large molecules probes the limits of wave–particle duality and quantum macroscopicity. [ 83 ] [ 84 ]
Matter-wave interferometers generate nanostructures on molecular beams that can be read with nanometer accuracy and therefore be used for highly sensitive force measurements, from which one can deduce a plethora of properties of individualized complex molecules. [ 85 ] | https://en.wikipedia.org/wiki/Matter_wave |
A matter wave clock is a type of clock whose principle of operation makes use of the apparent wavelike properties of matter.
Matter waves were first proposed by Louis de Broglie and are sometimes called de Broglie waves. They form a key aspect of wave–particle duality and experiments have since supported the idea. The wave associated with a particle of a given mass, such as an atom , has a defined frequency , and the duration of one cycle from peak to peak is sometimes called its Compton periodicity . Such a matter wave has the characteristics of a simple clock, in that it marks out fixed and equal intervals of time. The twins paradox arising from Albert Einstein 's theory of relativity means that a moving particle will have a slightly different period from a stationary particle. Comparing two such particles allows the construction of a practical "Compton clock". [ 1 ]
De Broglie proposed that the frequency f of a matter wave equals E / h , where E is the total energy of the particle and h is the Planck constant . For a particle at rest, the relativistic equation E = mc 2 allows the derivation of the Compton frequency f for a stationary massive particle, equal to mc 2 / h .
De Broglie also proposed that the wavelength λ for a moving particle was equal to h / p where p is the particle's momentum.
The period (one cycle of the wave) is equal to 1/ f .
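Putting these relations together for a specific particle, the Python sketch below computes the Compton frequency and periodicity of a caesium-133 atom; the atomic mass and physical constants are hard-coded approximations used purely for illustration.

    c = 2.99792458e8            # speed of light, m/s
    h = 6.62607015e-34          # Planck constant, J*s
    u = 1.66053907e-27          # atomic mass unit, kg

    m_cs = 132.905 * u          # approximate mass of a caesium-133 atom

    f_compton = m_cs * c**2 / h         # Compton frequency, f = m c^2 / h
    period = 1.0 / f_compton            # Compton periodicity, s

    print(f"f = {f_compton:.3e} Hz, period = {period:.3e} s")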
This precise Compton periodicity of a matter wave is said to be the necessary condition for a clock, with the implication that any such matter particle may be regarded as a fundamental clock. This proposal has been referred to as "A rock is a clock." [ 2 ]
In his paper, "Quantum mechanics, matter waves and moving clocks", Müller has suggested that "The description of matter waves as matter-wave clocks ... has recently been applied to tests of general relativity, matter-wave experiments, the foundations of quantum mechanics, quantum space-time decoherence, the matter wave clock/mass standard, and led to a discussion on the role of the proper time in quantum mechanics. It is generally covariant and thus well-suited for use in curved space-time, e.g., gravitational waves." [ 3 ]
In his paper, "Quantum mechanics, matter waves and moving clocks", Müller has suggested that "[The model] has also given rise to a fair amount of controversy. Within the broader context of quantum mechanics ... this description has been abandoned, in part because it could not be used to derive a relativistic quantum theory, or explain spin. The descriptions that replaced the clock picture achieve these goals, but do not motivate the concepts used. ... We shall construct a ... description of matter waves as clocks. We will thus arrive at a space-time path integral that is equivalent to the Dirac equation. This derivation shows that De Broglie's matter wave theory naturally leads to particles with spin-1/2. It relates to Feynman's search for a formula for the amplitude of a path in 3+1 space and time dimensions which is equivalent to the Dirac equation. It yields a new intuitive interpretation of the propagation of a Dirac particle and reproduces all results of standard quantum mechanics, including those supposedly at odds with it. Thus, it illuminates the role of the gravitational redshift and the proper time in quantum mechanics." [ 3 ]
The theoretical idea of matter waves as clocks has caused some controversy, and has attracted strong criticism. [ 4 ] [ 5 ]
An atom interferometer uses a small difference in waves associated with two atoms to create an observable interference pattern. Conventionally these waves are associated with the electrons orbiting the atom, but the matter wave theory suggests that the wave associated with the wave–particle duality of the atom itself may alternatively be used.
An experimental device comprises two clouds of atoms, one of which is given a small "kick" from a precisely-tuned laser. This gives it a finite velocity which, according to the matter wave theory, lowers its observed frequency. The two clouds are then recombined so that their differing waves interfere, and the maximum output signal is obtained when the frequency difference is an integer number of cycles.
Experiments designed around the idea of interference between matter waves (as clocks) are claimed to have provided the most accurate validation yet of the gravitational redshift predicted by general relativity . A similar atom interferometer forms the heart of the Compton clock .
However, this claimed interpretation of the interferometry function has been criticised. One criticism is that a real Compton oscillator or matter wave does not appear in the design of any actual experiment. [ 6 ] The matter wave interpretation is also said to be flawed. [ 4 ] [ 5 ]
A functional timepiece designed on the basis of matter wave interferometry is called a Compton clock. [ 2 ]
The frequency of the wave associated with a massive particle, such as an atom, is too high to be used directly in a practical clock and its period and wavelength are too short. A practical device makes use of the twin paradox arising from the theory of relativity , where a moving particle ages more slowly than a stationary one. The moving particle-wave therefore has a slightly lower frequency. Using interferometry , the difference or "beat frequency" between the two frequencies can be accurately measured and this beat frequency can be used as a basis for keeping time. [ 3 ]
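A rough Python sketch of this idea is given below; it uses a caesium atom and a two-photon recoil from 852 nm light purely as illustrative assumptions, and applies the first-order approximation 1 − 1/γ ≈ v²/2c² for the time-dilation shift, so the resulting beat frequency only indicates an order of magnitude.

    c = 2.99792458e8
    h = 6.62607015e-34
    u = 1.66053907e-27

    m = 132.905 * u                     # caesium-atom mass (illustrative particle choice)
    f0 = m * c**2 / h                   # Compton frequency of the atom at rest

    lam_laser = 852e-9                  # wavelength of the "kick" laser (illustrative value)
    v = 2 * h / (m * lam_laser)         # two-photon recoil velocity of the moving cloud

    # first-order time-dilation shift: 1 - 1/gamma ~ v^2 / (2 c^2) for v << c
    beat = f0 * v**2 / (2 * c**2)

    print(f"recoil velocity = {v:.2e} m/s")
    print(f"beat frequency  = {beat:.2e} Hz")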
The technique used in the devices can theoretically be reversed to use time to measure mass. This has been proposed as an opportunity for replacing the platinum–iridium cylinder long used as the 1 kg reference standard. [ 2 ] | https://en.wikipedia.org/wiki/Matter_wave_clock |
Mattermost is an open-source , self-hostable online chat service with file sharing, search, and third party application integrations. It is designed as an internal chat for organisations and companies, and mostly markets itself as an open-source alternative to Slack [ 6 ] [ 7 ] and Microsoft Teams .
The code was originally proprietary, as Mattermost was used as an internal chat tool inside SpinPunch, a game developer studio, but was later open-sourced. [ 7 ] Version 1.0 was released on October 2, 2015. [ 8 ]
The project is maintained and developed by Mattermost Inc. The company generates funds by selling support services and additional features that are not in the open-source edition.
It was also integrated into GitLab as "GitLab Mattermost". [ 9 ]
In the media, Mattermost is mostly regarded as an alternative to the more popular Slack . [ 10 ] [ 11 ] [ 12 ] [ 13 ] Aside from the in-browser version, there are desktop clients for Windows , MacOS and Linux and mobile apps for iOS and Android .
As of version 6.0, Mattermost includes kanban board and playbook features integrated into the main interface. [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Mattermost |
The Matteucci effect is the creation of a helical anisotropy of the magnetic susceptibility of a magnetostrictive material when subjected to a torque . It is one of the magnetomechanical effects and is thermodynamically inverse to the Wiedemann effect . [ 1 ] This effect was described by Carlo Matteucci in 1858. It is observable in amorphous wires with a helical domain structure, which can be obtained by twisting the wire or annealing it under twist. The effect is most distinct in the so-called 'dwarven alloys' (so called in allusion to the kobold folklore from which the element name cobalt derives), with cobalt as the main constituent. [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Matteucci_effect |
Matthew John Fuchter FRSC is a British chemist who is a professor of chemistry at the University of Oxford . [ 1 ] His research focuses on the development and application of novel functional molecular systems to a broad range of areas; from materials to medicine. He has been awarded both the Harrison-Meldola Memorial Prize (2014) and the Corday–Morgan Prizes (2021) of the Royal Society of Chemistry . [ 2 ] In 2020 he was a finalist for the Blavatnik Awards for Young Scientists .
Fuchter earned a master's degree (MSci) in chemistry at the University of Bristol , where he was awarded the Richard Dixon prize. [ 3 ] It was during his undergraduate degree that he first became interested in organic synthesis. [ 4 ] As a graduate student he moved to Imperial College London , where he worked with Anthony Barrett on the synthesis and applications of porphyrazines , including as therapeutic agents . [ 5 ] [ 6 ] During his doctoral studies Barrett and Fuchter collaborated with Brian M. Hoffman at Northwestern University . [ 2 ]
After completing his PhD, Fuchter moved to Australia , for postdoctoral research at CSIRO and the University of Melbourne , where he worked with Andrew Bruce Holmes . [ 2 ] [ 7 ] In 2007 Fuchter returned to the United Kingdom , where he began his independent academic career at the School of Pharmacy, University of London (now UCL School of Pharmacy ). [ 2 ] Less than one year later he was appointed a Lecturer at Imperial College London , where he was promoted to Reader (Associate Professor) in 2015 and professor in 2019. [ 2 ] [ 8 ] Fuchter develops photoswitchable molecules, chiral materials and new pharmaceuticals.
Fuchter is interested in how considerations of chirality can be applied to the development of novel approaches in chiral optoelectronic materials and devices. [ 2 ] In particular, he focusses on the introduction of chiral-optical (so-called chiroptical) properties into optoelectronic materials. [ 2 ] Amongst these materials, Fuchter has extensively evaluated the use of chiral small molecule additives ( helicenes [ 7 ] ) to induce chiroptical properties into light emitting polymers for the realisation of chiral ( circularly polarised , CP) OLEDs . [ 2 ] [ 7 ] He has also investigated the application of such materials in circularly polarised photodetectors , which are devices that are capable of detecting circularly polarised light. [ 2 ] As well as using chiral functional materials for light emission and detection, Fuchter has investigated the charge transport properties of enantiopure and racemic chiral functional materials.
Fuchter has also developed novel molecular photoswitches – molecules that can be cleanly and reversibly interconverted between two states using light – with a focus on heteroaromatic versions of azobenzene . The arylazopyrazole switches developed by Fuchter outperform the ubiquitous azobenzene switches, demonstrating complete photoswitching in both directions and thermal half-lives of the Z isomer of up to 46 years. Fuchter continues to apply these switches to a range of photoaddressable applications from photopharmacology to energy storage .
Alongside his work on functional material discovery, Fuchter works in medicinal chemistry and develops small molecule ligands that can either inhibit or stimulate the activity of disease relevant proteins. [ 2 ] [ 9 ] While he has worked on many drug targets, he has specialised in proteins involved in the transcriptional and epigenetic processes of disease. A particular interest has been the development of inhibitors for the histone-lysine methyltransferase enzymes in the Plasmodium parasite that causes human malaria. [ 10 ]
In 2018 one of the cancer drugs developed by Fuchter, together with Anthony Barrett, Simak Ali and Charles Coombes entered a phase 1 clinical trial , and as of 2020, it is in phase 2. [ 7 ] [ 11 ] The drug, which was designed using computational chemistry , inhibits the cyclin-dependent kinase 7 (CDK7), a transcriptional regulatory protein that also regulates the cell cycle. Certain cancers rely on CDK7, so inhibition of this enzyme has potential to have a significant impact on cancer pathogenesis. [ 11 ]
In 2024 Fuchter joined the University of Oxford as a Professor of Chemistry and the Sydney Bailey Fellow in Chemistry at St Peter’s College Oxford . [ 1 ]
Fuchter serves on the editorial board of MedChemComm . [ 12 ] He is an elected council member of the Royal Society of Chemistry organic division. [ 13 ] Fuchter is co-director of the Imperial College London Centre for Drug Discovery Science. [ 14 ] | https://en.wikipedia.org/wiki/Matthew_Fuchter |
Matthew Jonathan Rosseinsky is a British academic who is Professor of Inorganic Chemistry at the University of Liverpool . He was awarded the Hughes Medal in 2011 "for his influential discoveries in the synthetic chemistry of solid state electronic materials and novel microporous structures."
He has been awarded the Harrison Memorial Prize (1991), [ 2 ] Corday-Morgan Medal [ 3 ] and Prize (2000) and Tilden Lectureship (2006) [ 4 ] of the Royal Society of Chemistry (RSC). In 2009, he was awarded the inaugural De Gennes Prize by the RSC, a lifetime achievement award in materials chemistry, open internationally. In 2013, he became a Royal Society Research Professor.
In 2017, he was awarded the Davy Medal of the Royal Society for “his advances in the design and discovery of functional materials, integrating the development of new experimental and computational techniques.” He gave the Muetterties Lectures at UC Berkeley and the Lee Lectures at the University of Chicago in 2017. In 2019, he gave the Flack Memorial Lectures of the Swiss Crystallographic Society [ 5 ] and was awarded the Frankland Lectureship by Imperial College London . In 2020, he was made an Honorary Fellow of the Chemical Research Society of India. In 2022, he gave the Davison Lectures at the Massachusetts Institute of Technology , and received the Basolo Award [ 6 ] of the Chicago Section of the American Chemical Society.
He was a member of the Science Minister’s Advanced Materials Leadership Council from 2014-2016, and of the governing Council of the Engineering and Physical Sciences Research Council from 2015-2019.
In 2023, he received the Eni Energy Frontiers Award [ 7 ] for the digital design and discovery of next-generation energy materials from the President of Italy. | https://en.wikipedia.org/wiki/Matthew_Rosseinsky |
In physics , the Matthias rules refer to a historical set of empirical guidelines on how to find superconductors . These rules were authored by Bernd T. Matthias , who discovered hundreds of superconductors using these principles in the 1950s and 1960s. Deviations from these rules have been found since the end of the 1970s with the discovery of unconventional superconductors .
Superconductivity was first discovered in solid mercury in 1911 by Heike Kamerlingh Onnes and Gilles Holst , who had developed new techniques to reach near- absolute zero temperatures. [ 1 ] [ 2 ] [ 3 ]
In subsequent decades, superconductivity was found in several other materials: in 1913, lead at 7 K; in the 1930s, niobium at 10 K; and in 1941, niobium nitride at 16 K.
In 1933, Walther Meissner and Robert Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon that has come to be known as the Meissner effect .
Bernd T. Matthias and John Kenneth Hulm were encouraged by Enrico Fermi to start a systematic experimental investigation in the 1950s, looking for superconductors in different elements and compounds. For this reason, they developed a technique based on the Meissner effect. [ 4 ] [ 5 ]
In collaboration with Theodore H. Geballe , Matthias broke the record in 1954 with the discovery of superconductivity in niobium–tin (Nb 3 Sn), which had the highest known transition temperature of about 18 K. [ 6 ] [ 5 ] Later Matthias would try to come up with general empirical properties for finding superconducting alloys. In the same year he published a first version of his famous guidelines, which came to be known as the "Matthias rules". [ 5 ] [ 7 ] Matthias was able to show in 1962 that some deviations from his rules were due to impurities or defects in the materials. [ 5 ] Using his rules, Matthias and collaborators found in 1965 niobium–germanium (Nb 3 Ge), with a record critical temperature above 20 K. [ 8 ] [ 9 ]
Matthias published a first outline of his rules in 1957. [ 5 ] [ 10 ] A successful microscopic theory of superconductivity would not appear until that same year, with the development of the BCS theory by John Bardeen , Leon Cooper , and John Robert Schrieffer . [ 11 ]
Geballe and Matthias won the Oliver E. Buckley Condensed Matter Prize in 1970 for "For their joint experimental investigations of superconductivity which have challenged theoretical understanding and opened up the technology of high field superconductors." [ 12 ]
One of the first deviations from Matthias's rules was found with the discovery of superconductivity in molybdenum sulfides and selenides . Matthias postulated an additional criterion in 1976 at the Rochester Conference on superconductivity to include these materials. [ 13 ]
Another violation of the Matthias rules appeared in 1979 with the discovery of heavy fermion superconductors by Frank Steglich , [ 14 ] where magnetism was expected to play a role, contrary to the Matthias rules. [ 15 ]
Matthias held the record for the superconductor with the highest critical temperature until high-temperature superconductors were discovered in 1986 by Georg Bednorz and K. Alex Müller . [ 5 ] [ 16 ] [ 17 ] [ 18 ]
The Matthias rules are a set of guidelines to find low temperature superconductors but were never provided in list form by Matthias.
A popular summarized version of these rules reads: [ 19 ] [ 20 ] [ 15 ] [ 8 ]
Rule 2 rules out materials near a metal–insulator transition, such as oxides . Rule 4 rules out materials in close vicinity to ferromagnetism or antiferromagnetism . [ 18 ] Rule 6 is not an official rule and is often added to indicate skepticism of the theories of the time. [ 15 ]
Other equivalent principles, as stated by Matthias, indicate working mainly with d-electron metals , with an average number of valence electrons that is preferably odd (3, 5, or 7), and with a high electron density or a high electron density of states at the Fermi level . [ 18 ]
In 1976, Matthias added the criterion to include "elements which will not react at all with molybdenum alone form superconducting compounds with Mo 3 S 4 and Mo 3 Se 4 , S or Se " due to deviations in molybdenum compounds. [ 15 ]
It has been argued that all of Matthias's rules have been shown not to be completely valid. [ 19 ] In particular, the rules are not valid for high-temperature superconductors ; alternative rules for these materials have been suggested. [ 18 ] [ 19 ] | https://en.wikipedia.org/wiki/Matthias_rules |
Matthieu Brouard was a French theologian, mathematician, philosopher and historian, who was born in Saint-Denis near Paris in 1520, and died in Geneva on July 15, 1576. [ 1 ] He is also known as Matthieu Brouart or Béroalde and (in Latin ) as Mattheus Beroaldus . He taught Greek to the young Thomas Bodley and was the father of François Béroalde de Verville .
| https://en.wikipedia.org/wiki/Matthieu_Brouard |
The Mattis–Bardeen theory is a theory that describes the electrodynamic properties of superconductivity . It is commonly applied in the research field of optical spectroscopy on superconductors. [ 1 ] [ 2 ]
It was derived to explain the anomalous skin effect of superconductors. Originally, the anomalous skin effect referred to the non-classical response of metals to a high-frequency electromagnetic field at low temperature, a problem solved by Robert G. Chambers . [ 3 ] At sufficiently low temperatures and high frequencies, the classically predicted skin depth ( normal skin effect ) fails because the mean free path of the electrons in a good metal becomes long. Not only normal metals but also superconductors show the anomalous skin effect, which has to be treated within the theory of Bardeen, Cooper and Schrieffer (BCS).
The clearest consequence of the BCS theory is the pairing of two electrons into a Cooper pair . After the transition to the superconducting state, a superconducting gap 2Δ opens in the single-particle density of states, and the dispersion relation can be described like that of a semiconductor with a band gap 2Δ around the Fermi energy . From Fermi's golden rule , the transition probabilities can be written as
where N s {\displaystyle N_{s}} is the density of states. And M s {\displaystyle M_{s}} is the matrix element of an interaction Hamiltonian H 1 {\displaystyle H_{1}} where
In the superconducting state, the terms of the Hamiltonian are not independent, because the superconducting state consists of a phase-coherent superposition of occupied one-electron states, whereas they are independent in the normal state. Therefore, interference terms appear in the absolute square of the matrix element. The coherence replaces the matrix element M s {\displaystyle M_{s}} by the single-electron matrix element M {\displaystyle M} together with the coherence factors F (Δ, E , E' ).
Then, the transition rate is
where the transition rate can be translated into the real part of the complex conductivity, σ 1 {\displaystyle \sigma _{1}} , because the electrodynamic energy absorption is proportional to σ 1 E 2 {\displaystyle \sigma _{1}E^{2}} .
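The superconducting density of states entering this transition rate has, in the simplest BCS picture, the standard form N_s(E)/N_n = |E|/√(E² − Δ²) for |E| > Δ and zero inside the gap; the Python sketch below evaluates it for an arbitrarily chosen gap value, purely as an illustration.

    import numpy as np

    delta = 1.0                         # gap value (arbitrary units for the sketch)
    E = np.linspace(-4.0, 4.0, 801)     # quasiparticle energy relative to the Fermi level

    # BCS single-particle density of states, normalized to the normal-state value
    Ns_over_Nn = np.where(np.abs(E) > delta,
                          np.abs(E) / np.sqrt(np.clip(E**2 - delta**2, 1e-12, None)),
                          0.0)

    for e in (0.0, 1.01, 1.5, 3.0):
        i = np.argmin(np.abs(E - e))
        print(f"E = {E[i]:5.2f}  Ns/Nn = {Ns_over_Nn[i]:6.2f}")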
At finite temperature, the response of the electrons to the incident electromagnetic wave can be regarded as having two parts, from the “superconducting” and the “normal” electrons. The first corresponds to the superconducting ground state and the second to electrons thermally excited out of the ground state. This picture is the so-called "two-fluid" model. Including the “normal” electrons, the ratio of the optical conductivity to that of the normal state is
where Θ ( x ) {\displaystyle \Theta (x)} is the Heaviside theta function.
The first term of the upper equation is the contribution of "normal" electrons, and the second term is due to the superconducting electrons.
The calculated optical conductivity appears to break the sum rule that the spectral weight should be conserved through the transition. This result implies that the missing spectral weight is concentrated in the zero-frequency limit, corresponding to a Dirac delta function (which accounts for the conduction of the superconducting condensate, i.e. the Cooper pairs). Much experimental data supports this prediction. This account of the electrodynamics of superconductivity is the starting point of optical studies. Because no superconducting T c exceeds 200 K and the superconducting gap value is about 3.5 k B T c , microwave or far-infrared spectroscopy is a suitable technique for applying this theory. With the Mattis–Bardeen theory, one can derive fruitful properties of the superconducting gap, such as the gap symmetry. | https://en.wikipedia.org/wiki/Mattis–Bardeen_theory |
Mattr Corp. , formerly known as Shawcor Ltd. , is a Canadian materials technology company, based in Toronto, Ontario, and listed on the Toronto Stock Exchange . [ 1 ] It specializes in providing services to the pipeline sector of the oil and gas market. It is one of the largest pipe-coating providers in the world. [ 2 ] In 2017, it had a revenue of $1.56 billion. [ 3 ] It was founded by Francis Shaw, the father of the founder of Shaw Communications , and there was substantial ownership in both companies by the Shaw family for many years. [ 4 ]
Shawcor was founded by Francis Shaw in rural Lambton County . [ 5 ] It was originally a construction company, but later expanded into pipeline coatings, cable television, and numerous other businesses. In the 1970s, the business was split between Francis's sons Leslie and JR Shaw ; Leslie inherited the pipeline services business, which became the current Shawcor, while JR Shaw inherited the western cable business, which became Shaw Communications . [ 5 ] Under Leslie's leadership, the company grew significantly; by 2002, it had a market capitalization of $1 billion, and 43 plants in 20 countries. [ 4 ] Around that time, Leslie ceded his leadership of the business to his daughter, Virginia Shaw. [ 6 ]
In 2012, Shawcor suggested that it might consider putting itself up for sale. [ 2 ] At the time, the company had a market capitalization of about $3 billion. The company eventually decided not to sell, causing its share price to fall 15%. [ 7 ] In 2013, it eliminated its dual class share structure, under which the Shaw family controlled the majority of voting shares. [ 8 ]
Shawcor operates through 5 business units: Pipeline Performance (Bredero-Shaw, Socotherm, Canusa-CPS, Dhatec), Composite Production Systems (FlexPipe, Global Poly, FlexFlow, ZCL/Xerxes, Parabeam), Integrity Management (Shaw Pipeline Services, Shawcor Inspection Services, Lake Superior Consulting), Oilfield Asset Management (Guardian, CSI), and Connection Systems (DSG-Canusa, Shawflex). [ 9 ] It has about 100 manufacturing and service facilities and sales offices and 6000 employees in 25 countries. [ 9 ]
The pipe coating solutions division, which is the largest division in the company, was formerly part of Halliburton , an international oilfield services conglomerate. At that time, it was named Bredero Price. Shawcor acquired the portion of the division it did not already own for $200 million in 2002. [ 10 ] | https://en.wikipedia.org/wiki/Mattr |
Mature messenger RNA , often abbreviated as mature mRNA is a eukaryotic RNA transcript that has been spliced and processed and is ready for translation in the course of protein synthesis . Unlike the eukaryotic RNA immediately after transcription known as precursor messenger RNA , [ 1 ] mature mRNA consists exclusively of exons and has all introns removed.
Mature mRNA is also called "mature transcript", "mature RNA" or "mRNA".
The production of a mature mRNA molecule occurs in 3 steps: [ 2 ] [ 3 ]
During capping , a 7-methylguanosine residue is attached to the 5'-terminal end of the primary transcript. This is otherwise known as the GTP or 5' cap. The 5' cap is used to increase mRNA stability. Further, the 5' cap is used as an attachment point for ribosomes . [ 1 ] Beyond this, the 5' cap has also been shown to have a role in exporting the mature mRNA from the nucleus and into the cytoplasm. [ 4 ]
In polyadenylation , a poly- adenosine tail of about 200 adenylate residues is added by a nuclear polymerase post-transcriptionally. This is known as a Poly-A tail and is used for stability and guidance, so that the mRNA can exit the nucleus and find the ribosome. [ 5 ] It is added at a polyadenylation site in the 3' untranslated region of the mRNA, cleaving the mRNA in the process. [ 6 ] When there are multiple polyadenylation sites on the same mRNA molecule, alternative polyadenylation can occur. [ 7 ] See polyadenylation for further details.
Pre-mRNA has both introns and exons. As a part of the maturation process, RNA splicing removes the non-coding RNA introns leaving behind the exons, which are then spliced and joined together to form the mature mRNA. [ 3 ] [ 8 ] Splicing is conducted by the spliceosome . The spliceosome is a large ribonucleoprotein which cleaves the RNA at the splicing site and recombines the exons of the RNA. Similar to polyadenylation, alternative splicing can occur, resulting in several possible proteins being translated from the same portion of DNA. [ 9 ] See RNA Splicing for further details. | https://en.wikipedia.org/wiki/Mature_messenger_RNA |
In petroleum geology , the maturity of a rock is a measure of its state in terms of hydrocarbon generation. Maturity is established using a combination of geochemical and basin modelling techniques.
Rocks with high total organic carbon , (termed source rocks ), will alter under increasing temperature such that the organic molecules slowly mature into hydrocarbons (see diagenesis ). Source rocks are therefore broadly categorised as immature (no hydrocarbon generation), sub-mature (limited hydrocarbon generation), mature (extensive hydrocarbon generation) and overmature (most hydrocarbons have been generated).
The maturity of a source rock can also be used as an indicator of its hydrocarbon potential . That is, if a rock is sub-mature, then it has a much higher potential to generate further hydrocarbons than one that is overmature.
| https://en.wikipedia.org/wiki/Maturity_(geology) |
A Maucha diagram , or Maucha symbol , is a graphical representation of the major cations and anions in a chemical sample. R. Maucha [ 1 ] published the symbol in 1932. [ 2 ]
It is mainly used by biologists and chemists for quickly recognising samples by their chemical composition. [ 3 ] [ 4 ] The symbol is similar in concept to the Stiff diagram . It conveys similar ionic information to the Piper diagram , though in a more compact format that is suitable as a map symbol or for showing changes with time. The Maucha diagram is a special case of the Radar chart and overcomes some of the limitations of the Pie chart by having equal angles for all variables and consistently showing each variable in the same position.
The star shape comprises eight kite-shaped polygons, the area of each of which is proportional to the concentration of an ion in milliequivalents per litre. The anions carbonate, bicarbonate, chloride and sulphate are on the left, while the cations potassium, sodium, calcium and magnesium are on the right. The total ionic concentration adds up to the area of the background circle, the total anion concentration adds up to the left semicircle and the total cation concentration adds up to the right semicircle. A method for drawing the diagram in R is available on GitHub. [ 5 ]
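A small Python sketch of the bookkeeping behind the symbol is given below; the ion concentrations are invented sample values, and the geometric construction (lateral kite vertices placed on the reference circle, apex distance set so that the kite area equals the ion's concentration) is an assumption for illustration rather than Maucha's exact plotting recipe.

    import math

    # Ion concentrations in meq/L (illustrative sample values, not real data)
    sample = {
        "K+": 0.5, "Na+": 3.2, "Ca2+": 2.1, "Mg2+": 1.4,         # cations, right half
        "CO3^2-": 0.3, "HCO3-": 3.0, "Cl-": 2.5, "SO4^2-": 1.4,  # anions, left half
    }

    total = sum(sample.values())
    R = math.sqrt(total / math.pi)      # radius of a circle whose area equals the total

    half_angle = math.radians(22.5)     # each of the 8 sectors spans 45 degrees
    for ion, conc in sample.items():
        # assumed construction: lateral kite vertices on the reference circle (radius R),
        # apex at distance d along the sector bisector, so kite area = R * d * sin(22.5 deg)
        d = conc / (R * math.sin(half_angle))
        print(f"{ion:7s} {conc:4.1f} meq/L  apex distance = {d:5.2f}")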
Broch and Yake modified Maucha's original fixed-size diagram by scaling for concentration. [ 6 ]
Further scaling using the logarithm of the ionic concentration enables the plotting of a wide range of concentrations on a single map. [ 7 ] [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Maucha_diagram |
In classical mechanics , Maupertuis's principle (named after Pierre Louis Maupertuis , 1698 – 1759) states that the path followed by a physical system is the one of least length (with a suitable interpretation of path and length ). [ 1 ] It is a special case of the more generally stated principle of least action . Using the calculus of variations , it results in an integral equation formulation of the equations of motion for the system.
Maupertuis's principle states that the true path of a system described by N {\displaystyle N} generalized coordinates q = ( q 1 , q 2 , … , q N ) {\displaystyle \mathbf {q} =\left(q_{1},q_{2},\ldots ,q_{N}\right)} between two specified states q 1 {\displaystyle \mathbf {q} _{1}} and q 2 {\displaystyle \mathbf {q} _{2}} is a minimum or a saddle point [ 2 ] of the abbreviated action functional ,
S 0 [ q ( t ) ] = d e f ∫ p ⋅ d q , {\displaystyle {\mathcal {S}}_{0}[\mathbf {q} (t)]\ {\stackrel {\mathrm {def} }{=}}\ \int \mathbf {p} \cdot d\mathbf {q} ,} where p = ( p 1 , p 2 , … , p N ) {\displaystyle \mathbf {p} =\left(p_{1},p_{2},\ldots ,p_{N}\right)} are the conjugate momenta of the generalized coordinates, defined by the equation p k = d e f ∂ L ∂ q ˙ k , {\displaystyle p_{k}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {\partial L}{\partial {\dot {q}}_{k}}},} where L ( q , q ˙ , t ) {\displaystyle L(\mathbf {q} ,{\dot {\mathbf {q} }},t)} is the Lagrangian function for the system. In other words, any first-order perturbation of the path results in (at most) second-order changes in S 0 {\displaystyle {\mathcal {S}}_{0}} . Note that the abbreviated action S 0 {\displaystyle {\mathcal {S}}_{0}} is a functional (i.e. a function from a vector space into its underlying scalar field), which in this case takes as its input a function (i.e. the paths between the two specified states).
For many systems, the kinetic energy T {\displaystyle T} is quadratic in the generalized velocities q ˙ {\displaystyle {\dot {\mathbf {q} }}} T = 1 2 q ˙ M q ˙ ⊺ {\displaystyle T={\frac {1}{2}}{\dot {\mathbf {q} }}\ \mathbf {M} \ {\dot {\mathbf {q} }}^{\intercal }} although the mass tensor M {\displaystyle \mathbf {M} } may be a complicated function of the generalized coordinates q {\displaystyle \mathbf {q} } . For such systems, a simple relation relates the kinetic energy, the generalized momenta and the generalized velocities 2 T = p ⋅ q ˙ {\displaystyle 2T=\mathbf {p} \cdot {\dot {\mathbf {q} }}} provided that the potential energy V ( q ) {\displaystyle V(\mathbf {q} )} does not involve the generalized velocities. By defining a normalized distance or metric d s {\displaystyle ds} in the space of generalized coordinates d s 2 = d q M d q ⊺ {\displaystyle ds^{2}=d\mathbf {q} \ \mathbf {M} \ d\mathbf {q^{\intercal }} } one may immediately recognize the mass tensor as a metric tensor . The kinetic energy may be written in a massless form T = 1 2 ( d s d t ) 2 {\displaystyle T={\frac {1}{2}}\left({\frac {ds}{dt}}\right)^{2}} or, 2 T d t = 2 T d s . {\displaystyle 2Tdt={\sqrt {2T}}\ ds.}
Therefore, the abbreviated action can be written S 0 = d e f ∫ p ⋅ d q = ∫ d s 2 E tot − V ( q ) {\displaystyle {\mathcal {S}}_{0}\ {\stackrel {\mathrm {def} }{=}}\ \int \mathbf {p} \cdot d\mathbf {q} =\int ds\,{\sqrt {2}}{\sqrt {E_{\text{tot}}-V(\mathbf {q} )}}} since the kinetic energy T = E tot − V ( q ) {\displaystyle T=E_{\text{tot}}-V(\mathbf {q} )} equals the (constant) total energy E tot {\displaystyle E_{\text{tot}}} minus the potential energy V ( q ) {\displaystyle V(\mathbf {q} )} . In particular, if the potential energy is a constant, then Jacobi's principle reduces to minimizing the path length s = ∫ d s {\textstyle s=\int ds} in the space of the generalized coordinates, which is equivalent to Hertz's principle of least curvature .
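A simple numerical illustration of the principle, with all masses, energies and coordinates chosen as arbitrary example values, is a particle crossing a potential step: minimizing the abbreviated action p₁L₁ + p₂L₂ over the crossing point reproduces the "mechanical Snell's law" p₁ sin θ₁ = p₂ sin θ₂. The Python sketch below does this by brute force.

    import math

    m, E = 1.0, 10.0                 # mass and total energy (arbitrary units)
    V1, V2 = 0.0, 6.0                # potential on each side of a step at y = 0
    p1 = math.sqrt(2 * m * (E - V1)) # momentum magnitudes from E = p^2/2m + V
    p2 = math.sqrt(2 * m * (E - V2))

    A = (0.0, 1.0)                   # start point (above the step)
    B = (3.0, -1.0)                  # end point (below the step)

    def abbreviated_action(x):
        # straight segments A -> (x, 0) -> B; S0 = p1*L1 + p2*L2
        L1 = math.hypot(x - A[0], A[1])
        L2 = math.hypot(B[0] - x, B[1])
        return p1 * L1 + p2 * L2

    # crude one-dimensional minimization over the crossing point x
    xs = [i * 0.0001 for i in range(30001)]
    x_best = min(xs, key=abbreviated_action)

    sin1 = (x_best - A[0]) / math.hypot(x_best - A[0], A[1])
    sin2 = (B[0] - x_best) / math.hypot(B[0] - x_best, B[1])
    print(f"p1*sin(theta1) = {p1 * sin1:.4f}")
    print(f"p2*sin(theta2) = {p2 * sin2:.4f}")   # equal at the minimizing path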
Hamilton's principle and Maupertuis's principle are occasionally confused with each other and both have been called the principle of least action . They differ from each other in three important ways: the action that is varied (the abbreviated action ∫ p · dq for Maupertuis's principle, the full action ∫ L dt for Hamilton's principle); the constraint imposed on the trial paths (fixed total energy for Maupertuis's principle, fixed travel time for Hamilton's principle); and what the solution determines (only the shape of the trajectory through the generalized coordinates for Maupertuis's principle, whereas Hamilton's principle gives the full trajectory as a function of time).
Maupertuis was the first to publish a principle of least action , as a way of adapting Fermat's principle for waves to a corpuscular (particle) theory of light. [ 3 ] : 96 Pierre de Fermat had explained Snell's law for the refraction of light by assuming light follows the path of shortest time , not distance. This troubled Maupertuis, since he felt that time and distance should be on an equal footing: "why should light prefer the path of shortest time over that of distance?" Maupertuis defined his action as ∫ v d s {\textstyle \int v\,ds} , which was to be minimized over all paths connecting two specified points. Here v {\displaystyle v} is the velocity of light in the corpuscular theory. Fermat had minimized ∫ d s / v {\textstyle \int \,ds/v} where v {\displaystyle v} is wave velocity; the two velocities are reciprocal so the two forms are equivalent.
In 1751, Maupertuis's priority for the principle of least action was challenged in print ( Nova Acta Eruditorum of Leipzig) by an old acquaintance, Johann Samuel Koenig , who quoted a 1707 letter purportedly from Gottfried Wilhelm Leibniz to Jakob Hermann that described results similar to those derived by Leonhard Euler in 1744.
Maupertuis and others demanded that Koenig produce the original of the letter to authenticate its having been written by Leibniz. Leibniz died in 1716 and Hermann in 1733, so neither could vouch for Koenig. Koenig claimed to have copied the letter from an original owned by Samuel Henzi , and had no clue as to the whereabouts of the original, as Henzi had been executed in 1749 for organizing the Henzi conspiracy for overthrowing the aristocratic government of Bern . [ 4 ] Subsequently, the Berlin Academy under Euler's direction declared the letter to be a forgery [ 5 ] and that Maupertuis could continue to claim priority for having invented the principle. Curiously, Voltaire became involved in the quarrel by composing Diatribe du docteur Akakia ("Diatribe of Doctor Akakia") to satirize Maupertuis' scientific theories (not limited to the principle of least action). While this work damaged Maupertuis's reputation, his claim to priority for least action remains secure. [ 4 ]
In mathematics , the Maurer–Cartan form for a Lie group G is a distinguished differential one-form on G that carries the basic infinitesimal information about the structure of G . It was much used by Élie Cartan as a basic ingredient of his method of moving frames , and bears his name together with that of Ludwig Maurer .
As a one-form, the Maurer–Cartan form is peculiar in that it takes its values in the Lie algebra associated to the Lie group G . The Lie algebra is identified with the tangent space of G at the identity, denoted T e G . The Maurer–Cartan form ω is thus a one-form defined globally on G , that is, a linear mapping of the tangent space T g G at each g ∈ G into T e G . It is given as the pushforward of a vector in T g G along the left-translation in the group:
A Lie group acts on itself by multiplication under the mapping G × G → G , ( g , h ) ↦ g h .
A question of importance to Cartan and his contemporaries was how to identify a principal homogeneous space of G . That is, a manifold P identical to the group G , but without a fixed choice of unit element. This motivation came, in part, from Felix Klein 's Erlangen programme where one was interested in a notion of symmetry on a space, where the symmetries of the space were transformations forming a Lie group. The geometries of interest were homogeneous spaces G / H , but usually without a fixed choice of origin corresponding to the coset eH .
A principal homogeneous space of G is a manifold P abstractly characterized by having a free and transitive action of G on P . The Maurer–Cartan form [ 1 ] gives an appropriate infinitesimal characterization of the principal homogeneous space. It is a one-form defined on P satisfying an integrability condition known as the Maurer–Cartan equation. Using this integrability condition, it is possible to define the exponential map of the Lie algebra and in this way obtain, locally, a group action on P .
Let g ≅ T e G be the tangent space of a Lie group G at the identity (its Lie algebra ). G acts on itself by left translation L : G × G → G
such that for a given g ∈ G we have L g : G → G , where L g ( h ) = g h ,
and this induces a map of the tangent bundle to itself: ( L g ) ∗ : T h G → T g h G . {\displaystyle (L_{g})_{*}:T_{h}G\to T_{gh}G.} A left-invariant vector field is a section X of T G such that [ 2 ]
The Maurer–Cartan form ω is a g -valued one-form on G defined on vectors v ∈ T g G by the formula ω g ( v ) = ( L g − 1 ) ∗ v .
If G is embedded in GL( n ) by a matrix valued mapping g =( g ij ) , then one can write ω explicitly as ω g = g − 1 d g .
In this sense, the Maurer–Cartan form is always the left logarithmic derivative of the identity map of G .
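As a hedged numerical check of the matrix formula ω = g⁻¹ dg, the Python sketch below evaluates it along an arbitrary smooth example path g(t) in SO(3) by finite differences and verifies that the result is antisymmetric, i.e. lies in the Lie algebra so(3).

    import numpy as np

    def rot(t):
        # an arbitrary smooth path g(t) in SO(3): a rotation about z followed by one about x
        a, b = 0.7 * t, 0.3 * t * t
        Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
        Rx = np.array([[1, 0, 0], [0, np.cos(b), -np.sin(b)], [0, np.sin(b), np.cos(b)]])
        return Rz @ Rx

    t, h = 0.9, 1e-6
    g = rot(t)
    gdot = (rot(t + h) - rot(t - h)) / (2 * h)      # finite-difference derivative dg/dt

    omega = np.linalg.inv(g) @ gdot                 # Maurer-Cartan form applied to d/dt: g^{-1} dg/dt

    print(np.round(omega, 6))
    print("antisymmetric:", np.allclose(omega, -omega.T, atol=1e-5))   # lies in so(3)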
If we regard the Lie group G as a principal bundle over a manifold consisting of a single point then the Maurer–Cartan form can also be characterized abstractly as the unique principal connection on the principal bundle G . Indeed, it is the unique g = T e G valued 1 -form on G satisfying ω e ( v ) = v for all v ∈ T e G , and R h ∗ ω = Ad ( h − 1 ) ω ,
where R h * is the pullback of forms along the right-translation in the group and Ad( h ) is the adjoint action on the Lie algebra.
If X is a left-invariant vector field on G , then ω ( X ) is constant on G . Furthermore, if X and Y are both left-invariant, then ω ( [ X , Y ] ) = [ ω ( X ) , ω ( Y ) ] ,
where the bracket on the left-hand side is the Lie bracket of vector fields , and the bracket on the right-hand side is the bracket on the Lie algebra g . (This may be used as the definition of the bracket on g .) These facts may be used to establish an isomorphism of Lie algebras g ≅ { left-invariant vector fields on G } .
By the definition of the exterior derivative , if X and Y are arbitrary vector fields then d ω ( X , Y ) = X ( ω ( Y ) ) − Y ( ω ( X ) ) − ω ( [ X , Y ] ) .
Here ω ( Y ) is the g -valued function obtained by duality from pairing the one-form ω with the vector field Y , and X ( ω ( Y )) is the Lie derivative of this function along X . Similarly Y ( ω ( X )) is the Lie derivative along Y of the g -valued function ω ( X ) .
In particular, if X and Y are left-invariant, then X ( ω ( Y ) ) = Y ( ω ( X ) ) = 0 ,
so d ω ( X , Y ) + [ ω ( X ) , ω ( Y ) ] = 0 ,
but the left-invariant fields span the tangent space at any point (the push-forward of a basis in T e G under a diffeomorphism is still a basis), so the equation is true for any pair of vector fields X and Y . This is known as the Maurer–Cartan equation . It is often written as d ω + ½ [ ω , ω ] = 0 .
Here [ω, ω] denotes the bracket of Lie algebra-valued forms .
One can also view the Maurer–Cartan form as being constructed from a Maurer–Cartan frame . Let E i be a basis of sections of T G consisting of left-invariant vector fields, and θ j be the dual basis of sections of T * G such that θ j ( E i ) = δ i j , the Kronecker delta . Then E i is a Maurer–Cartan frame, and θ i is a Maurer–Cartan coframe .
Since E i is left-invariant, applying the Maurer–Cartan form to it simply returns the value of E i at the identity. Thus ω ( E i ) = E i ( e ) ∈ g . Thus, the Maurer–Cartan form can be written ω = ∑ i E i ( e ) θ i .     (1)
Suppose that the Lie brackets of the vector fields E i are given by [ E i , E j ] = ∑ k c ij k E k .
The quantities c ij k are the structure constants of the Lie algebra (relative to the basis E i ). A simple calculation, using the definition of the exterior derivative d , yields d θ k ( E i , E j ) = − θ k ( [ E i , E j ] ) = − c ij k ,
so that by duality d θ k = − ½ ∑ i , j c ij k θ i ∧ θ j .
This equation is also often called the Maurer–Cartan equation . To relate it to the previous definition, which only involved the Maurer–Cartan form ω , take the exterior derivative of (1) : d ω = ∑ i E i ( e ) d θ i .
The frame components are given by d ω ( E i , E j ) = ∑ k E k ( e ) d θ k ( E i , E j ) = − ∑ k c ij k E k ( e ) = − [ ω ( E i ) , ω ( E j ) ] ,
which establishes the equivalence of the two forms of the Maurer–Cartan equation.
Maurer–Cartan forms play an important role in Cartan's method of moving frames . In this context, one may view the Maurer–Cartan form as a 1 -form defined on the tautological principal bundle associated with a homogeneous space . If H is a closed subgroup of G , then G / H is a smooth manifold of dimension dim G − dim H . The quotient map G → G / H induces the structure of an H -principal bundle over G / H . The Maurer–Cartan form on the Lie group G yields a flat Cartan connection for this principal bundle. In particular, if H = { e }, then this Cartan connection is an ordinary connection form , and we have d ω + ½ [ ω , ω ] = 0 ,
which is the condition for the vanishing of the curvature.
In the method of moving frames, one sometimes considers a local section of the tautological bundle, say s : G / H → G . (If working on a submanifold of the homogeneous space, then s need only be a local section over the submanifold.) The pullback of the Maurer–Cartan form along s defines a non-degenerate g -valued 1 -form θ = s * ω over the base. The Maurer–Cartan equation implies that d θ + ½ [ θ , θ ] = 0 .
Moreover, if s U and s V are a pair of local sections defined, respectively, over open sets U and V , then they are related by an element of H in each fibre of the bundle: s V ( x ) = s U ( x ) h ( x ) , for x ∈ U ∩ V .
The differential of h gives a compatibility condition relating the two sections on the overlap region: θ V = Ad ( h − 1 ) θ U + h ∗ ω H ,
where ω H is the Maurer–Cartan form on the group H .
A system of non-degenerate g -valued 1 -forms θ U defined on open sets in a manifold M , satisfying the Maurer–Cartan structural equations and the compatibility conditions endows the manifold M locally with the structure of the homogeneous space G / H . In other words, there is locally a diffeomorphism of M into the homogeneous space, such that θ U is the pullback of the Maurer–Cartan form along some section of the tautological bundle. This is a consequence of the existence of primitives of the Darboux derivative . | https://en.wikipedia.org/wiki/Maurer–Cartan_form |
Maurice Henri Léonard Pirenne (30 May 1912, Verviers –11 October 1978, Oxford ) was a Belgian scientist known for his work in vision physiology.
Pirenne was born to Maria (née Duesberg) and artist Maurice Lucien Henri Joseph Marie Pirenne on 30 May 1912 in Verviers , Belgium. His uncles were medievalist historian, Henri Pirenne and anatomist and cytologist Jules Duesberg [ fr ] . Pirenne's lifelong interest in drawing and painting, nurtured by his artist father, underscored his fascination with the convergence of visual physiology and artistic expression. While still at school he read Brücke and Helmholtz on the optics of painting.
After earning his Doctor of Science degree from Liege in 1937 and supported by a grant from the Belgian government, he engaged in a year of research in molecular physics under Peter Debye 's mentorship, attending seminars led by Victor Henri in which he established connections with significant fellow students. A pivotal phase of his career was the next three years, 1938–40, spent at Columbia University in New York as a Fellow of the Belgian American Educational Foundation where he collaborated with Selig Hecht to explore the biophysics of vision. With Hecht, Pirenne investigated iris contraction in the nocturnal long-eared owl in reaction to infrared radiation . [ 1 ] This experience significantly influenced his future devotion to the biophysics of vision. [ 2 ]
After experiments that they reported to the American Association for the Advancement of Science and that received attention in the media, [ 3 ] [ 4 ] [ 5 ] a joint 1942 paper authored by Hecht, Shlaer, and Pirenne marked a turning point in the understanding of visual perception near the absolute threshold level by measuring the minimum number of photons the human eye can detect 60% of the time. [ 6 ] [ 7 ] [ 8 ] This paper highlighted that the perceived variability, previously attributed to biological causes, predominantly stemmed from physical fluctuations in the small number of light quanta absorbed by the visual photo-pigment. Pirenne's subsequent research revolved around the visual threshold and its correlation with visual acuity.
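The Hecht–Shlaer–Pirenne analysis models the probability of seeing a dim flash as the probability that at least n photons are absorbed, with the absorbed number Poisson-distributed. The Python sketch below reproduces such a frequency-of-seeing curve; the absorption fraction and threshold used here are illustrative assumptions, not the published fit.

    import math

    def prob_seeing(mean_photons_at_cornea, alpha=0.06, n_threshold=6):
        # alpha: assumed fraction of corneal photons actually absorbed (illustrative value)
        # n_threshold: assumed minimum number of absorbed photons needed for a "seen" report
        a = alpha * mean_photons_at_cornea          # mean of the Poisson-distributed absorptions
        p_below = sum(math.exp(-a) * a**k / math.factorial(k) for k in range(n_threshold))
        return 1.0 - p_below                        # P(at least n_threshold photons absorbed)

    for flash in (20, 50, 100, 200, 400):           # mean photons per flash at the cornea
        print(f"{flash:4d} photons -> P(seen) = {prob_seeing(flash):.2f}")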
During the Second World War, from March 1941, he had to break with science and join the Belgian forces marshalled in Canada as a reserve officer, and in June of that year he was in Great Britain as secretary-treasurer of the Central Welfare Committee of the Belgian Land Forces. [ 9 ] On his return to England, Pirenne's intricate neurophysiological studies of 'on' and 'off' neuronal units and their interactions found practical application in screening military personnel for night blindness, work which he carried out there until 1945. [ 9 ] [ 10 ]
Pirenne employed his investigations of the senses in a physiological approach to the philosophical mind-body problem . [ 11 ] He held academic positions in Cambridge, was appointed ICI research fellow at London University in 1945, during which time he published The diffraction of X-rays and electrons by free molecules [ 12 ] in 1946, and then moved to Aberdeen , where he lectured in physiology from 1948 to 1955 while continuing to write on his investigation of visual thresholds, [ 13 ] [ 14 ] before joining the University Laboratory of Physiology at Oxford in 1955.
His appointment as a fellow with Wolfson College recognised his teaching methods, remembered for their hands-on demonstrations and pragmatic approach, based on his meticulous preparation.
Engaged briefly to Margaret Billinghurst in 1946, [ 15 ] Pirenne married, on 16 May 1947, Katherine ('Kathy') Alice Mary Clutton, born in Devonport and they remained partners until the end of his life. [ 9 ] In 1948 he was naturalised as a British citizen. [ 9 ]
Pirenne published on the relation of optics to art, notably in the 1952 essay "The scientific basis of Leonardo da Vinci's theory of perspective." [ 16 ] His 1970 work, Optics, Painting and Photography , investigated optical and perspective effects in trompe-l'oeil art and photography, analysed through imagery from a pinhole camera . In it he notably [ 17 ] refutes Erwin Panofsky's claim [ 18 ] that, due to the curvature of the retina, the geometrical construction of perspective (which provides an image on a plane) does not correspond to what is actually perceived and should instead use curves, to which Pirenne responds:
...the fact that the retina, and perforce the retinal image, are curved [...] has led some authors to the idea that a truly 'physiological' perspective should consist of some kind of pseudo-development upon the picture plane of an image curved in shape like the retinal image, which allegedly would lead to systems of 'curvilinear perspective'. But, first, the retinal image is not what we see: what we see is the external world. Secondly, the geometrical construction of such a pseudo-development remains obscure--unless it leads back to central, 'rectilinear', perspective. It would be pointless to reiterate the argument that central perspective, in which straight lines are never projected as curves on a plane, is the only method which is capable of producing a retinal image having the same shape as the retinal image of the actual objects depicted. [ 19 ]
Pirenne's final publication in 1975, titled Vision and Art , continued his exploration of the relationship between visual perception and its artistic interpretation.
Amongst his eighty publications, Pirenne's 1948 Vision and the Eye , [ 20 ] remained an authoritative and accessible introduction to the subject. His stature as an international authority in visual physiology was affirmed through recognition such as a Doctor of Science degree from Cambridge in 1972 and his appointment as a Foreign Member of the Royal Belgian Academy of Sciences . He died in Oxford on 11 October 1978. | https://en.wikipedia.org/wiki/Maurice_Henri_Léonard_Pirenne |
Maurice Newman was a painter, sculptor, model maker and photographer. He was the son of Abraham Newman and Tobi Schmukler, and was born in Lithuania in 1898. He was married to Edythe Brenda Tichell from 1930 to his death in 1977. He had one daughter, Rachel Newman .
In his teens, Newman left Lithuania to live in Switzerland , and acted as a messenger, delivering messages between clandestine lovers; he spoke Russian, Lithuanian, Polish, German and Yiddish. He then lived in England and South Africa, attending the National School of Arts in Johannesburg . In the early 1920s, Newman migrated to the U.S., lived in Boston , and worked in the Newton offices of the Bachrach Studios . After a brief stint as a retoucher at the White Studios in New York City , he returned to Boston to work as a commercial artist while attending the Vesper George School of Art , the School of the Museum of Fine Arts (now the School of the Museum of Fine Arts at Tufts ), and the Woodbury School of Art.
In 1940, Newman was employed as a model maker by the Federal Works of Art Passive Defense Project ( Federal Art Project ). In 1942 he relocated to Alexandria, Virginia , as a civilian Army employee to head the model shop in the United States Army Engineer Research and Development Laboratory at Fort Belvoir . During World War II , he constructed dioramas and topographical bombing maps. Following the war, projects shifted to the Cold War and civil defense .
In retirement, Newman was able to fully devote his time to portrait painting, as well as sculptures in wood and aluminum. His aluminum sculpture, one of the first U.S. memorials to the six million Jews martyred by Hitler, was unveiled in 1963 at the Kansas City, Missouri Jewish Community Center; the keynote speaker at the unveiling was former President Harry S. Truman . [ 1 ] [ 2 ] [ 3 ] [ 4 ] His dioramas and miniatures were exhibited at the Boston Children's Museum , 1939 New York World's Fair and the Peabody Essex Museum . | https://en.wikipedia.org/wiki/Maurice_Newman_(artist) |
Mautner's lemma in representation theory , named after Austrian-American mathematician Friederich Mautner , states that if G is a topological group and π a unitary representation of G on a Hilbert space H , then for any x in G which has conjugates

y x y⁻¹

converging to the identity element e , for a net of elements y , any vector v of H invariant under all the π( y ) is also invariant under π( x ).
This algebra -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Mautner's_lemma |
Mauveine , also known as aniline purple and Perkin's mauve , was one of the first synthetic dyes . [ 1 ] [ 2 ] It was discovered serendipitously by William Henry Perkin in 1856 while he was attempting to synthesise the phytochemical quinine for the treatment of malaria . [ 3 ] It is also among the first chemical dyes to have been mass-produced. [ 4 ] [ 5 ]
Mauveine is a mixture of four related aromatic compounds differing in number and placement of methyl groups . Its organic synthesis involves dissolving aniline , p -toluidine , and o -toluidine in sulfuric acid and water in a roughly 1:1:2 ratio, then adding potassium dichromate . [ 6 ]
Mauveine A (C 26 H 23 N 4 + X − ) incorporates two molecules of aniline , one of p -toluidine, and one of o -toluidine. Mauveine B (C 27 H 25 N 4 + X − ) incorporates one molecule each of aniline and p -toluidine, and two of o -toluidine. In 1879, Perkin showed that mauveine B is related to safranines by oxidative / reductive loss of the p -tolyl group. [ 7 ] In fact, safranine is a 2,8-dimethyl phenazinium salt, whereas the parasafranine produced by Perkin is presumed [ 8 ] to be the 1,8- (or 2,9-) dimethyl isomer .
The molecular structure of mauveine proved difficult to determine, finally being identified in 1994. [ 9 ] In 2007, two more were isolated and identified: mauveine B2 , an isomer of mauveine B with methyl on different aryl group, and mauveine C , which has one more p -methyl group than mauveine A. [ 10 ]
In 2008, additional mauveines and pseudomauveines were discovered, bringing the total number of these compounds up to 12. [ 11 ] In 2015 a crystal structure was reported for the first time. [ 12 ]
Mauveine colour swatch: hex #8D029B
In 1856, William Henry Perkin , then age 18, was given a challenge by his professor, August Wilhelm von Hofmann , to synthesize quinine . In one attempt, Perkin oxidized aniline using potassium dichromate , whose toluidine impurities reacted with the aniline and yielded a black solid, suggesting a "failed" organic synthesis. Cleaning the flask with alcohol, Perkin noticed purple portions of the solution.
Suitable as a dye for silk and other textiles , it was patented by Perkin, who the next year opened a dyeworks mass-producing it at Greenford on the banks of the Grand Union Canal in Middlesex . [ 13 ] It was originally called aniline purple . In 1859, it was named mauve in England via the French name for the mallow flower, and chemists later called it mauveine. [ 14 ] Between 1859 and 1861, mauve became a fashion must-have. The weekly journal All the Year Round described women wearing the colour as "all flying countryward, like so many migrating birds of purple paradise". [ 15 ] Punch magazine published cartoons poking fun at the huge popularity of the colour: “The Mauve Measles are spreading to so serious an extent that it is high time to consider by what means [they] may be checked.” [ 16 ] [ 17 ] [ 18 ]
By 1870, demand had succumbed to newer synthetic colors in the synthetic dye industry that mauveine had launched.
In the early 20th century, the U.S. National Association of Confectioners permitted mauveine as a food coloring with a variety of equivalent names: rosolan , violet paste , chrome violet , anilin violet , anilin purple , Perkin's violet , indisin , phenamin , purpurin and lydin . [ 19 ]
Laborers in the aniline dye industry were later found to be at increased risk of bladder cancer, specifically transitional cell carcinoma , yet by the 1950s, the synthetic dye industry had helped transform medicine , including cancer treatment. [ 20 ] [ 21 ] [ 22 ] | https://en.wikipedia.org/wiki/Mauveine |
In game theory a max-dominated strategy is a strategy which is not a best response to any strategy profile of the other players. This is an extension to the notion of strictly dominated strategies , which are max-dominated as well.
A strategy s_i ∈ S_i of player i is max-dominated if for every strategy profile of the other players s_{-i} ∈ S_{-i} there is a strategy s_i′ ∈ S_i such that u_i(s_i′, s_{-i}) > u_i(s_i, s_{-i}). This definition means that s_i is not a best response to any strategy profile s_{-i}, since for every such strategy profile there is another strategy s_i′ which gives higher utility than s_i for player i.
If a strategy s_i ∈ S_i is strictly dominated by strategy s_i′ ∈ S_i then it is also max-dominated , since for every strategy profile of the other players s_{-i} ∈ S_{-i}, s_i′ is the strategy for which u_i(s_i′, s_{-i}) > u_i(s_i, s_{-i}).
Even if s_i is strictly dominated by a mixed strategy it is also max-dominated .
A strategy s_i ∈ S_i of player i is weakly max-dominated if for every strategy profile of the other players s_{-i} ∈ S_{-i} there is a strategy s_i′ ∈ S_i such that u_i(s_i′, s_{-i}) ≥ u_i(s_i, s_{-i}). This definition means that s_i is either not a best response or not the only best response to any strategy profile s_{-i}, since for every such strategy profile there is another strategy s_i′ which gives at least the same utility as s_i for player i.
If a strategy s_i ∈ S_i is weakly dominated by strategy s_i′ ∈ S_i then it is also weakly max-dominated , since for every strategy profile of the other players s_{-i} ∈ S_{-i}, s_i′ is the strategy for which u_i(s_i′, s_{-i}) ≥ u_i(s_i, s_{-i}).
Even if s_i is weakly dominated by a mixed strategy it is also weakly max-dominated .
A game G is said to be max-solvable if by iterated elimination of max-dominated strategies only one strategy profile is left at the end.
More formally we say that G is max-solvable if there exists a sequence of games G_0, ..., G_r such that:
Obviously every max-solvable game has a unique pure Nash equilibrium which is the strategy profile left in G_r.
As in the previous part one can define respectively the notion of weakly max-solvable games , which are games for which a game with a single strategy profile can be reached by eliminating weakly max-dominated strategies . The main difference would be that weakly max-solvable games may have more than one pure Nash equilibrium , and that the order of elimination might result in different Nash equilibria.
The prisoner's dilemma is an example of a max-solvable game (as it is also dominance solvable). The strategy cooperate is max-dominated by the strategy defect for both players, since playing defect always gives the player a higher utility, no matter what the other player plays. To see this note that if the row player plays cooperate then the column player would prefer playing defect and go free than playing cooperate and serving one year in jail. If the row player plays defect then the column player would prefer playing defect and serve three years in jail rather than playing cooperate and serving five years in jail.
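A small sketch of the definition at work (illustrative only; the payoff matrices encode the jail terms above as negative utilities, and the helper function is not taken from the cited literature):

```python
# Check max-domination in a two-player game given as payoff matrices.
# Prisoner's dilemma: utilities are negative years in jail; strategies 0 = cooperate, 1 = defect.
import numpy as np

row_payoff = np.array([[-1, -5],      # row cooperates: (C,C) -> 1 year, (C,D) -> 5 years
                       [ 0, -3]])     # row defects:    (D,C) -> goes free, (D,D) -> 3 years
col_payoff = row_payoff.T             # the game is symmetric

def max_dominated_strategies(payoff, axis):
    """Strategies of one player that are never a best response to any opponent strategy.
    axis=0: the player's strategies index the rows; axis=1: they index the columns."""
    u = payoff if axis == 0 else payoff.T
    best_response = u.max(axis=0)                     # best achievable against each opponent strategy
    return [s for s in range(u.shape[0]) if np.all(u[s] < best_response)]

print(max_dominated_strategies(row_payoff, axis=0))   # [0]: cooperate is max-dominated for the row player
print(max_dominated_strategies(col_payoff, axis=1))   # [0]: cooperate is max-dominated for the column player
```

Eliminating the max-dominated strategy for each player leaves the single profile (defect, defect), which is the unique pure Nash equilibrium, as described above.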
In any max-solvable game, best-reply dynamics ultimately leads to the unique pure Nash equilibrium of the game. In order to see this, all we need to do is notice that if s_1, s_2, s_3, ..., s_k is an elimination sequence of the game (meaning that first s_1 is eliminated from the strategy space of some player since it is max-dominated, then s_2 is eliminated, and so on), then in the best-response dynamics s_1 will never be played by its player after one iteration of best responses, s_2 will never be played by its player after two iterations of best responses, and so on. The reason for this is that s_1 is not a best response to any strategy profile of the other players s_{-i}, so after one iteration of best responses its player must have chosen a different strategy. Since we understand that we will never return to s_1 in any iteration of the best responses, we can treat the game after one iteration of best responses as if s_1 has been eliminated from the game, and complete the proof by induction.
It may come as a surprise, then, that weakly max-solvable games do not necessarily converge to a pure Nash equilibrium when using the best-reply dynamics , as can be seen in the game on the right. If the game starts at the bottom left cell of the matrix, then the following best-reply dynamics is possible: the row player moves one row up to the center row, the column player moves to the right column, the row player moves back to the bottom row, the column player moves back to the left column and so on. This obviously never converges to the unique pure Nash equilibrium of the game (which is the upper left cell in the payoff matrix ).
Dominance (game theory) | https://en.wikipedia.org/wiki/Max-dominated_strategy |
Max August Zorn ( German: [tsɔʁn] ; June 6, 1906 – March 9, 1993) was a German mathematician . He was an algebraist , group theorist , and numerical analyst . He is best known for Zorn's lemma , a method used in set theory that is applicable to a wide range of mathematical constructs such as vector spaces and ordered sets , amongst others. Zorn's lemma was first postulated by Kazimierz Kuratowski in 1922, and then independently by Zorn in 1935.
Zorn was born in Krefeld , Germany . He attended the University of Hamburg . He received his PhD in April 1930 for a thesis on alternative algebras . He published his findings in Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg . [ 1 ] [ 2 ] Zorn showed that split-octonions could be represented by a mixed-style of matrices called Zorn's vector-matrix algebra .
Max Zorn was appointed to an assistant position at the University of Halle . However, he did not have the opportunity to work there for long as he was forced to leave Germany in 1933 because of policies enacted by the Nazis . According to grandson Eric, "[Max] spoke with a raspy, airy voice most of his life. Few people knew why, because he only told the story after significant prodding, but he talked that way because pro-Hitler thugs who objected to his politics, had battered his throat in a 1933 street fight." [ 3 ]
Zorn immigrated to the United States and was appointed a Sterling Fellow at Yale University . While at Yale, Zorn wrote his paper "A Remark on Method in Transfinite Algebra" [ 4 ] that stated his Maximum Principle, later called Zorn's lemma . It requires a collection of sets that contains the union of any chain of its members to have a maximal element , that is, a member not contained in any other. He illustrated the principle with applications in ring theory and field extensions. Zorn's lemma is an alternative expression of the axiom of choice , and thus a subject of interest in axiomatic set theory .
In 1936 he moved to UCLA and remained until 1946. While at UCLA Zorn revisited his study of alternative rings and proved the existence of the nilradical of certain alternative rings . [ 5 ] According to Angus E. Taylor , Max was his most stimulating colleague at UCLA. [ 6 ]
In 1946 Zorn became a professor at Indiana University , where he taught until retiring in 1971. He was thesis advisor for Israel Nathan Herstein .
Zorn died in Bloomington, Indiana , in March 1993, of congestive heart failure. [ 7 ]
Max Zorn married Alice Schlottau and they had one son, Jens, and one daughter, Liz. Jens (born June 19, 1931) is an emeritus professor of physics at the University of Michigan and an accomplished sculptor. Max Zorn's grandson Eric Zorn was a columnist for the Chicago Tribune from 1986 until 2021; after retirement Eric Zorn started a newsletter titled The Picayune Sentinel, [ 8 ] named after the mathematics newsletter that Max Zorn had distributed during his years at Indiana University. Max's great grandson, Alexander Wolken Zorn, received a PhD in mathematics from the University of California Berkeley in 2018. [ 9 ] | https://en.wikipedia.org/wiki/Max_August_Zorn |
The Max Planck Institute for Biological Intelligence ( German : Max-Planck-Institut für biologische Intelligenz ; abbreviated MPI-BI ) is a non-university research institute of the Max Planck Society . The institute is dedicated to basic research on topics in behavioral ecology , evolutionary biology and neuroscience . [ 1 ] Research at the international institute focuses on how animal organisms acquire, store, apply and pass on knowledge about their environment in order to find ever-new solutions to problems and adapt to a constantly changing environment. Model organisms include Drosophila , zebrafish , mice and various bird species.
The board of directors manages the institute, which has around 500 employees from more than 50 nations. One of the institute's directors serves as managing director for a fixed period of time. As of February 2024, Manfred Gahr is the managing director of the institute.
The MPI-BI emerged in January 2022 from the Max Planck Institute of Neurobiology (MPIN) and the Max Planck Institute for Ornithology (MPIO). Following a founding year, the legal founding of the institute took place on 1 January 2023.
The institute has two locations: At the nature-oriented Seewiesen campus, in the municipality of Pöcking near Starnberg , field research is combined with modern methods of behavioral biology. At the Martinsried campus in the southwest of Munich , neuroscientific research is currently the main focus. Here, laboratory experiments are combined with state-of-the-art methods such as optogenetics , connectomics or machine learning .
Scientific research at the Max Planck Institute for Biological Intelligence is thematically divided into seven research departments and 17 independent research groups. Numerous thematic connections between the groups result in a lively exchange and numerous collaborations within the institute.
Biological intelligence describes the ability to achieve complex goals. Animal organisms are able to attain this for example by means of calculation, planning and decision-making—as individuals or in groups. The brains and the associated behavior that we can observe today are the result of evolution due to the successful adaptation to previously mastered challenges.
The goal of research at MPI-BI is to decipher the mechanisms of biological intelligence at its various levels. Research approaches range from the investigation of molecular interactions to studies of entire groups of individuals. A particular focus lies on animal behavior in its natural environment, as the adaptation of biological systems occurs in harmony with their surroundings. The study of the brain in its natural environment thus provides insight, for example, into how organisms communicate with each other and change their environment, or how social interactions lead to the formation of differentiated societies. | https://en.wikipedia.org/wiki/Max_Planck_Institute_for_Biological_Intelligence
The Max Planck Institute for Chemical Physics of Solids (MPI CPfS) ( German : Max-Planck-Institut für Chemische Physik fester Stoffe ) is a research institute of the Max Planck Society . Located in Dresden , Germany, the institute primarily conducts basic research in the natural sciences in the fields of physics and chemistry . [ 1 ]
The MPI CPfS conducts research on modern solid state chemistry and physics . Key open questions include "understanding the interplay of topology and symmetry in modern materials, maximising the level of control in material synthesis, understanding the nature of the chemical bond in intermetallic compounds and studying giant response functions at the borderline of standard metallic and superconducting behaviour". [ 2 ]
This article about a chemistry organization is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Max_Planck_Institute_for_Chemical_Physics_of_Solids |
The Max Rubner Institute (MRI), Federal Research Institute of Nutrition and Food is a higher federal authority of the Federal Republic of Germany in the portfolio of the Federal Ministry of Food and Agriculture (BMEL). The research focus is on consumer health protection in the nutrition sector. In this field, the MRI advises the BMEL.
The institute was named after the physician and physiologist Max Rubner . [ 1 ] Until January 1, 2008, the institute was called the Federal Research Institute of Nutrition and Food (BfEL). The president of the MRI is Tanja Schwerdtle. [ 2 ]
The institution's headquarters are in Karlsruhe . Other locations are Kiel , Detmold and Kulmbach . The Münster site has been closed; "the fish quality department is currently still located in Hamburg." In total, the institute employs around 200 scientists at its various sites. [ 3 ]
The MRI is a member of the Working Group of Departmental Research Institutions.
The MRI's predecessor, the Federal Research Institute of Nutrition and Food, was established on January 1, 2004, through the merger of the following institutions:
This food -related article is a stub . You can help Wikipedia by expanding it .
This article about a chemistry organization is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Max_Rubner_Institute |
Max Valier was a 15 kg (33 lb) X-ray telescopic satellite which was built in collaboration with the Technologische Fachoberschule "Max Valier" in Bozen , the Technologische Fachoberschule "Oskar von Miller" in Meran and the Amateurastronomen "Max Valier". The Max Planck Institute for Astrophysics provided the small X-ray telescope μRosi, which allowed amateur astronomers to observe the sky in X-ray wavelength for the first time. It was launched with the help of the OHB in Germany by an Indian PSLV -C38 rocket on June 23, 2017. [ 1 ] [ 2 ] It re-entered the Earth's atmosphere on June 30, 2024 after remaining operative for 7 years and 7 days. [ 3 ]
This spacecraft or satellite related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Max_Valier_(satellite) |
The max q , or maximum dynamic pressure , condition is the point when an aerospace vehicle's atmospheric flight reaches the maximum difference between the fluid dynamics total pressure and the ambient static pressure . For an airplane , this occurs at the maximum speed at minimum altitude corner of the flight envelope . For a space vehicle launch, this occurs at the crossover point between dynamic pressure increasing with speed and static pressure decreasing with increasing altitude. This is an important design factor of aerospace vehicles, since the aerodynamic structural load on the vehicle is proportional to dynamic pressure.
Dynamic pressure q is defined in incompressible fluid dynamics as q = ½ ρ v², where ρ is the local air density , and v is the vehicle's velocity . The dynamic pressure can be thought of as the kinetic energy density of the air with respect to the vehicle, and for incompressible flow equals the difference between total pressure and static pressure .
This quantity appears notably in the lift and drag equations .
For a car traveling at 56 miles per hour (90 km/h) at sea level (where the air density is about 0.0765 pounds per cubic foot (1.225 kg/m³)), [ 1 ] the dynamic pressure on the front of the car is 0.0555 pounds per square inch (3.83 hPa), about 0.38% of the static pressure (14.696 pounds per square inch (1,013.3 hPa) at sea level).
For an airliner cruising at 755 feet per second (828 km/h) at an altitude of 33,000 feet (10 km) (where the air density is about 0.0258 pounds per cubic foot (0.413 kg/m³)), the dynamic pressure on the front of the plane is 1.586 pounds per square inch (109.4 hPa), about 41% of the static pressure (3.84 pounds per square inch (265 hPa)).
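These figures follow directly from q = ½ ρ v²; a short check (illustrative only, using the rounded metric values quoted above):

```python
# Verify the worked examples above from q = 0.5 * rho * v^2 (SI units).
PSI_PER_PA = 1.0 / 6894.757          # pascals to pounds per square inch

def dynamic_pressure(rho_kg_m3, v_m_s):
    return 0.5 * rho_kg_m3 * v_m_s ** 2          # result in pascals

# Car: 90 km/h at sea level (rho ~ 1.225 kg/m^3, static pressure ~ 101,325 Pa)
q_car = dynamic_pressure(1.225, 90 / 3.6)
print(f"car: {q_car:.0f} Pa = {q_car * PSI_PER_PA:.4f} psi "
      f"({100 * q_car / 101325:.2f}% of sea-level static pressure)")

# Airliner: 755 ft/s at 33,000 ft (rho ~ 0.413 kg/m^3, static pressure ~ 26,500 Pa)
q_air = dynamic_pressure(0.413, 755 * 0.3048)
print(f"airliner: {q_air:.0f} Pa = {q_air * PSI_PER_PA:.3f} psi "
      f"({100 * q_air / 26500:.0f}% of ambient static pressure)")
```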
For a launch of a space vehicle from the ground, dynamic pressure is again q = ½ ρ v², where ρ is now the air density at the vehicle's current altitude and v is its current speed.
During the launch, the vehicle speed increases but the air density decreases as the vehicle rises. The dynamic pressure is therefore zero at lift-off (when v = 0), becomes negligible again once the vehicle leaves the atmosphere (as ρ → 0), and is positive in between, so there is a point where it is maximal; since the dynamic pressure takes the same value at both ends of the ascent, Rolle's theorem guarantees a point in between where its derivative vanishes.
In other words, before reaching max q , the dynamic pressure increase due to increasing velocity is greater than the dynamic pressure decrease due to decreasing air density such that the net dynamic pressure (opposing kinetic energy) acting on the craft continues to increase. After passing max q , the opposite is true. The net dynamic pressure acting against the craft decreases faster as the air density decreases with altitude than it increases from increasing velocity, ultimately reaching 0 when the air density becomes zero.
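A toy calculation illustrates the crossover (the constant net acceleration and the exponential atmosphere with an 8.5 km scale height are assumptions chosen only to make the shape of the curve visible, not a model of any real launcher):

```python
# Toy ascent: constant acceleration straight up through an exponential atmosphere.
# The dynamic pressure first rises with speed, then falls as density drops off.
import numpy as np

RHO0, SCALE_HEIGHT = 1.225, 8500.0     # kg/m^3, m (rough exponential-atmosphere model)
ACCEL = 20.0                           # m/s^2, assumed constant net acceleration

t = np.linspace(0.0, 120.0, 2401)      # seconds after lift-off
v = ACCEL * t                          # speed
h = 0.5 * ACCEL * t ** 2               # altitude
q = 0.5 * RHO0 * np.exp(-h / SCALE_HEIGHT) * v ** 2

i = int(np.argmax(q))
print(f"max q ~ {q[i] / 101325:.2f} atm at t ~ {t[i]:.0f} s, "
      f"altitude ~ {h[i] / 1000:.1f} km, speed ~ {v[i]:.0f} m/s")
```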
This value is significant, since it is one of the constraints that determines the structural load that the vehicle must bear. For many vehicles, if launched at full throttle, the aerodynamic forces would be higher than what they can withstand. For this reason, they are often throttled down before approaching max q and back up afterwards, so as to reduce the speed and hence the maximum dynamic pressure encountered along the flight.
During a normal Space Shuttle launch, for example, max q value of 0.32 atmospheres occurred at an altitude of approximately 11 km (36,000 ft), about one minute after launch. [ 2 ] The three Space Shuttle Main Engines were throttled back to about 65–72% of their rated thrust (depending on payload) as the dynamic pressure approached max q . [ 3 ] Combined with the propellant grain design of the solid rocket boosters , which reduced the thrust at max q by one third after 50 seconds of burn, the total stresses on the vehicle were kept to a safe level.
During a typical Apollo mission, the max q (also just over 0.3 atmospheres) occurred between 13 and 14 kilometres (43,000–46,000 ft) of altitude; [ 4 ] [ 5 ] approximately the same values occur for the SpaceX Falcon 9 . [ 6 ]
The point of max q is a key milestone during a space vehicle launch, as it is the point at which the airframe undergoes maximum mechanical stress. | https://en.wikipedia.org/wiki/Max_q |
Maxam–Gilbert sequencing is a method of DNA sequencing developed by Allan Maxam and Walter Gilbert in 1976–1977. This method is based on nucleobase -specific partial chemical modification of DNA and subsequent cleavage of the DNA backbone at sites adjacent to the modified nucleotides . [ 1 ]
Maxam–Gilbert sequencing was the first widely adopted method for DNA sequencing, and, along with the Sanger dideoxy method , represents the first generation of DNA sequencing methods. Maxam–Gilbert sequencing is no longer in widespread use, having been supplanted by next-generation sequencing methods.
Although Maxam and Gilbert published their chemical sequencing method two years after Frederick Sanger and Alan Coulson published their work on plus-minus sequencing, [ 2 ] [ 3 ] Maxam–Gilbert sequencing rapidly became more popular, since purified DNA could be used directly, while the initial Sanger method required that each read start be cloned for production of single-stranded DNA. However, with the improvement of the chain-termination method (see below), Maxam–Gilbert sequencing has fallen out of favour due to its technical complexity prohibiting its use in standard molecular biology kits, extensive use of hazardous chemicals, and difficulties with scale-up. [ 4 ]
Allan Maxam and Walter Gilbert’s 1977 paper “A new method for sequencing DNA” was honored by a Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society for 2017. It was presented to the Department of Molecular & Cellular Biology, Harvard University. [ 5 ]
Maxam–Gilbert sequencing requires radioactive labeling at one 5′ end of the DNA fragment to be sequenced (typically by a kinase reaction using gamma- 32 P ATP ) and purification of the DNA. Chemical treatment generates breaks at a small proportion of one or two of the four nucleotide bases in each of four reactions (G, A+G, C, C+T). For example, the purines (A+G) are depurinated using formic acid , the guanines (and to some extent the adenines ) are methylated by dimethyl sulfate , and the pyrimidines (C+T) are hydrolysed using hydrazine . The addition of salt ( sodium chloride ) to the hydrazine reaction inhibits the reaction of thymine for the C-only reaction. The modified DNAs may then be cleaved by hot piperidine ((CH 2 ) 5 NH) at the position of the modified base. The concentration of the modifying chemicals is controlled to introduce on average one modification per DNA molecule. Thus a series of labeled fragments is generated, from the radiolabeled end to the first "cut" site in each molecule.
The fragments in the four reactions are electrophoresed side by side in denaturing acrylamide gels for size separation. To visualize the fragments, the gel is exposed to X-ray film for autoradiography , yielding a series of dark bands each showing the location of identical radiolabeled DNA molecules. From presence and absence of certain fragments the sequence may be inferred. [ 1 ] [ 6 ]
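The read-out logic described above can be sketched as follows (an idealized illustration, not a laboratory protocol; it assumes perfect, unambiguous band calls in all four lanes):

```python
# Minimal sketch: reading a Maxam-Gilbert gel from band presence/absence.
# Each lane lists the fragment lengths (positions from the labeled 5' end) where cleavage occurred.
#   band in "G" and "A+G"   -> G        band only in "A+G"  -> A
#   band in "C" and "C+T"   -> C        band only in "C+T"  -> T

def simulate_lanes(sequence):
    """Idealized cleavage: every base produces a band in the matching lane(s)."""
    lanes = {"G": set(), "A+G": set(), "C": set(), "C+T": set()}
    for pos, base in enumerate(sequence, start=1):
        if base == "G":
            lanes["G"].add(pos); lanes["A+G"].add(pos)
        elif base == "A":
            lanes["A+G"].add(pos)
        elif base == "C":
            lanes["C"].add(pos); lanes["C+T"].add(pos)
        elif base == "T":
            lanes["C+T"].add(pos)
    return lanes

def read_gel(lanes, length):
    """Infer the sequence from the presence/absence of bands in the four lanes."""
    sequence = []
    for pos in range(1, length + 1):
        if pos in lanes["G"]:
            sequence.append("G")
        elif pos in lanes["A+G"]:
            sequence.append("A")
        elif pos in lanes["C"]:
            sequence.append("C")
        elif pos in lanes["C+T"]:
            sequence.append("T")
        else:
            sequence.append("N")   # no band: position cannot be called
    return "".join(sequence)

original = "GATCCGTA"
lanes = simulate_lanes(original)
assert read_gel(lanes, len(original)) == original
print(read_gel(lanes, len(original)))  # GATCCGTA
```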
This method led to the Methylation Interference Assay, used to map DNA-binding sites for DNA-binding proteins . [ 7 ]
An automated Maxam–Gilbert sequencing protocol was developed in 1994. [ 8 ] | https://en.wikipedia.org/wiki/Maxam–Gilbert_sequencing |
Maxim Nikolaevich Chernodub [ 2 ] (born June 7, 1973) is a French physicist of Ukrainian descent best known for his postulation of the magnetic-field-induced superconductivity of the vacuum .
Chernodub attended Lycée 145 in Kyiv from 1980 to 1990. He earned a bachelor's degree and a Master of Science, at the Moscow Institute of Physics and Technology , in 1993 and 1996, respectively, and a Ph.D. at the Institute for Theoretical and Experimental Physics (ITEP) in Moscow, in 1999. In 2007, there followed his habilitation at the ITEP. [ 1 ]
Chernodub worked for the ITEP (1994–2001, 2003–2006, 2007–2008) and for the Japanese Kanazawa (2001–2003) and Hiroshima University (2006–2007). Since 2008, he holds a permanent position as a researcher for the French National Centre for Scientific Research (CNRS), at the Laboratoire de mathématiques et physique théorique of the University of Tours . [ 1 ] He is also a visiting professor at the Department of Physics and Astronomy, Ghent University ( Belgium ; 2010–2012), [ 1 ] and a referee for the Natural Sciences and Engineering Research Council of Canada , the Russian Ministry of Education and Science , and the French National Agency for Research . [ 1 ]
Chernodub found, on the basis of the theory of quantum chromodynamics (QCD), that charged rho mesons [ 3 ] — charged virtual particles popping into and out of being in a vacuum — can linger long enough to become real in a magnetic field of 10¹⁶ tesla or more. [ 3 ] They share the same quantum state and form a condensate , flowing together as one particle. The condensed rho mesons may carry electric current without resistance along the magnetic field lines. [ 3 ] The internal magnetic fields of the particles align with the magnetic field around them, which causes a decrease of the total energy. [ 3 ]
Among several unusual properties of this postulated superconductivity of the vacuum is that it would, unlike previously known superconductivity, be expected to persist at temperatures of at least a billion, [ 4 ] perhaps billions of degrees. [ 5 ] Chernodub sees a possible explanation of his results in the quarks and antiquarks constituting the rho mesons being forced to move only along the magnetic field lines, which would render the rho mesons far more stable. [ 5 ] The effective mass of the rho mesons would be lowered to zero, enabling them to condense and move freely, due to an interaction of their spins with the external magnetic field. [ 5 ] The apparently strange situation that a current should flow without a carrier is explained by the fact that a vacuum is never truly empty. [ 5 ] [ 6 ]
In the realm of astrophysics, Chernodub's calculations could mean that periods of vacuum superconductivity in the early days of the universe caused the emergence of the large-scale magnetic fields observed in space, whose origin is so far mysterious. [ 4 ] [ 5 ] At present, magnetic fields of 10¹⁶ T are far from being reached anywhere in the known universe. [ 5 ]
Chernodub believes that his prediction could be proven at the Large Hadron Collider (LHC) near Geneva or at the Relativistic Heavy Ion Collider (RHIC) of Brookhaven National Laboratory in Upton, New York. Ions colliding at these particle accelerators could create a magnetic field of almost the required strength in a "near miss", for perhaps one yoctosecond. Chernodub expects that vacuum superconductivity would, if it exists, leave a trace of charged rho mesons at the accelerators. [ 5 ] | https://en.wikipedia.org/wiki/Maxim_Chernodub |
In abstract algebra , particularly ring theory , maximal common divisors are an abstraction of the number theory concept of greatest common divisor (GCD). This definition is slightly more general than GCDs, and may exist in rings in which GCDs do not. Halter-Koch (1998) provides the following definition. [ 1 ]
d ∈ H is a maximal common divisor of a subset B ⊂ H if the following criteria are met:
This abstract algebra -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Maximal_common_divisor |
The maximal ergodic theorem is a theorem in ergodic theory , a discipline within mathematics .
Suppose that (X, B, μ) is a probability space , that T : X → X is a (possibly noninvertible) measure-preserving transformation , and that f ∈ L¹(μ, ℝ). Define f* by

f*(x) = sup_{N ≥ 1} (1/N) Σ_{n=0}^{N−1} f(T^n x).

Then the maximal ergodic theorem states that

∫_{f* > λ} f dμ ≥ λ · μ{x : f*(x) > λ}

for any λ ∈ ℝ.
This theorem is used to prove the point-wise ergodic theorem .
This probability -related article is a stub . You can help Wikipedia by expanding it .
This chaos theory -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Maximal_ergodic_theorem |
In statistics , the maximal information coefficient ( MIC ) is a measure of the strength of the linear or non-linear association between two variables X and Y .
The MIC belongs to the maximal information-based nonparametric exploration (MINE) class of statistics. [ 1 ] In a simulation study, MIC outperformed some selected low power tests, [ 1 ] however concerns have been raised regarding reduced statistical power in detecting some associations in settings with low sample size when compared to powerful methods such as distance correlation and Heller–Heller–Gorfine (HHG). [ 2 ] Comparisons with these methods, in which MIC was outperformed, were made in Simon and Tibshirani [ 3 ] and in Gorfine, Heller, and Heller. [ 4 ] It is claimed [ 1 ] that MIC approximately satisfies a property called equitability which is illustrated by selected simulation studies. [ 1 ] It was later proved that no non-trivial coefficient can exactly satisfy the equitability property as defined by Reshef et al., [ 1 ] [ 5 ] although this result has been challenged. [ 6 ] Some criticisms of MIC are addressed by Reshef et al. in further studies published on arXiv. [ 7 ]
The maximal information coefficient uses binning as a means to apply mutual information on continuous random variables. Binning has been used for some time as a way of applying mutual information to continuous distributions; what MIC contributes in addition is a methodology for selecting the number of bins and picking a maximum over many possible grids.
The rationale is that the bins for both variables should be chosen in such a way that the mutual information between the variables is maximal. That is achieved whenever H(X_b) = H(Y_b) = H(X_b, Y_b). [ Note 1 ] Thus, when the mutual information is maximal over a binning of the data, we should expect that the following two properties hold, as far as the nature of the data allows. First, the bins would have roughly the same size, because the entropies H(X_b) and H(Y_b) are maximized by equal-sized binning. And second, each bin of X will roughly correspond to a bin in Y .
Because the variables X and Y are real numbers , it is almost always possible to create exactly one bin for each ( x , y ) datapoint, and that would yield a very high value of the MI. To avoid forming this kind of trivial partitioning, the authors of the paper propose taking a number of bins n_x for X and n_y for Y whose product is relatively small compared with the size N of the data sample . Concretely, they propose:
n_x × n_y ≤ N^0.6
In some cases it is possible to achieve a good correspondence between X_b and Y_b with numbers as low as n_x = 2 and n_y = 2, while in other cases the number of bins required may be higher. The maximum for I(X_b; Y_b) is determined by H(X), which is in turn determined by the number of bins in each axis; therefore, the mutual information value will be dependent on the number of bins selected for each variable. In order to compare mutual information values obtained with partitions of different sizes, the mutual information value is normalized by dividing by the maximum achievable value for the given partition size. It is worth noting that a similar adaptive binning procedure for estimating mutual information had been proposed previously. [ 8 ] Entropy is maximized by uniform probability distributions, or in this case, bins with the same number of elements. Also, joint entropy is minimized by having a one-to-one correspondence between bins. If we substitute such values in the formula I(X; Y) = H(X) + H(Y) − H(X, Y), we can see that the maximum value achievable by the MI for a given pair n_x, n_y of bin counts is log min(n_x, n_y). Thus, this value is used as a normalizing divisor for each pair of bin counts.
Last, the normalized maximal mutual information value for different combinations of n_x and n_y is tabulated, and the maximum value in the table selected as the value of the statistic.
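The following sketch illustrates the search over grid sizes subject to n_x × n_y ≤ N^0.6 and the normalization by log min(n_x, n_y). It is a simplification rather than the reference implementation: it fixes equal-frequency bins for each grid size instead of optimizing bin placement, and the helper names are illustrative.

```python
# Rough sketch of an MIC-like statistic (illustrative only).
import numpy as np
from itertools import product

def mutual_information(x_bins, y_bins):
    """Mutual information in bits of two discrete labelings."""
    joint = np.histogram2d(x_bins, y_bins,
                           bins=[x_bins.max() + 1, y_bins.max() + 1])[0]
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def quantile_bins(values, n_bins):
    """Assign each value to one of n_bins roughly equal-frequency bins."""
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.searchsorted(edges, values)

def mic(x, y, alpha=0.6):
    n = len(x)
    max_cells = n ** alpha
    best = 0.0
    for nx, ny in product(range(2, int(max_cells) + 1), repeat=2):
        if nx * ny > max_cells:
            continue
        mi = mutual_information(quantile_bins(x, nx), quantile_bins(y, ny))
        best = max(best, mi / np.log2(min(nx, ny)))   # normalize by log min(nx, ny)
    return best

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
print(round(mic(x, x**2 + 0.05 * rng.normal(size=500)), 2))   # strong nonlinear association: high value
print(round(mic(x, rng.uniform(-1, 1, 500)), 2))              # near independence: low value
```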
It is important to note that trying all possible binning schemes that satisfy n_x × n_y ≤ N^0.6 is computationally unfeasible even for small n. Therefore, in practice the authors apply a heuristic which may or may not find the true maximum. | https://en.wikipedia.org/wiki/Maximal_information_coefficient
In abstract algebra , a branch of mathematics , a maximal semilattice quotient is a commutative monoid derived from another commutative monoid by making certain elements equivalent to each other.
Every commutative monoid can be endowed with its algebraic preordering ≤. By definition, x ≤ y holds if there exists z such that x + z = y. Further, for x, y in M, let x ∝ y hold if there exists a positive integer n such that x ≤ ny, and let x ≍ y hold if x ∝ y and y ∝ x. The binary relation ≍ is a monoid congruence of M, and the quotient monoid M/≍ is the maximal semilattice quotient of M.
This terminology can be explained by the fact that the canonical projection p from M onto M/≍ is universal among all monoid homomorphisms from M to a (∨,0)- semilattice , that is, for any (∨,0)-semilattice S and any monoid homomorphism f : M → S, there exists a unique (∨,0)-homomorphism g : M/≍ → S such that f = gp.
If M is a refinement monoid , then M/≍ is a distributive semilattice .
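As a concrete illustration (not from the article), the construction can be carried out mechanically for a small finite commutative monoid. The sketch below assumes truncated addition on {0, 1, 2} as the example monoid and recovers the two-element (∨, 0)-semilattice as its maximal semilattice quotient.

```python
# Sketch: compute the maximal semilattice quotient of a small commutative monoid.
# Example monoid: {0, 1, 2} under truncated addition x + y := min(x + y, 2), identity 0.
elements = [0, 1, 2]

def add(x, y):                            # the monoid operation
    return min(x + y, 2)

def leq(x, y):                            # algebraic preorder: x <= y iff x + z = y for some z
    return any(add(x, z) == y for z in elements)

def n_times(n, x):                        # n-fold sum x + x + ... + x
    s = 0                                 # identity element
    for _ in range(n):
        s = add(s, x)
    return s

def propto(x, y):                         # x ∝ y iff x <= n*y for some positive integer n
    return any(leq(x, n_times(n, y)) for n in range(1, len(elements) + 1))

def equiv(x, y):                          # x ≍ y iff x ∝ y and y ∝ x
    return propto(x, y) and propto(y, x)

# Partition the monoid into ≍-classes; these are the elements of M/≍.
classes = []
for x in elements:
    for c in classes:
        if equiv(x, c[0]):
            c.append(x)
            break
    else:
        classes.append([x])

print(classes)   # [[0], [1, 2]] -> the quotient is the two-element (∨, 0)-semilattice
```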
This algebra -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Maximal_semilattice_quotient |
Maximally informative dimensions is a dimensionality reduction technique used in the statistical analyses of neural responses . Specifically, it is a way of projecting a stimulus onto a low-dimensional subspace so that as much information as possible about the stimulus is preserved in the neural response. It is motivated by the fact that natural stimuli are typically confined by their statistics to a lower-dimensional space than that spanned by white noise [ 1 ] but correctly identifying this subspace using traditional techniques is complicated by the correlations that exist within natural images. Within this subspace, stimulus-response functions may be either linear or nonlinear . The idea was originally developed by Tatyana Sharpee , Nicole C. Rust , and William Bialek in 2003. [ 2 ]
Neural stimulus-response functions are typically given as the probability of a neuron generating an action potential , or spike, in response to a stimulus s . The goal of maximally informative dimensions is to find a small relevant subspace of the much larger stimulus space that accurately captures the salient features of s . Let D denote the dimensionality of the entire stimulus space and K denote the dimensionality of the relevant subspace, such that K ≪ D. We let {v^K} denote the basis of the relevant subspace, and s^K the projection of s onto {v^K}. Using Bayes' theorem we can write out the probability of a spike given a stimulus:
where f ( s^K ) is some nonlinear function of the projected stimulus.
In order to choose the optimal {v^K}, we compare the prior stimulus distribution P(s) with the spike-triggered stimulus distribution P(s | spike) using the Shannon information . The average information (averaged across all presented stimuli) per spike is given by
Now consider a K = 1 dimensional subspace defined by a single direction v . The average information conveyed by a single spike about the projection x = s ⋅ v is

I(v) = ∫ dx P_v(x | spike) log₂ [ P_v(x | spike) / P_v(x) ],
where the probability distributions are approximated by a measured data set via P_v(x | spike) = ⟨δ(x − s ⋅ v) | spike⟩_s and P_v(x) = ⟨δ(x − s ⋅ v)⟩_s , i.e., each presented stimulus is represented by a scaled Dirac delta function and the probability distributions are created by averaging over all spike-eliciting stimuli, in the former case, or the entire presented stimulus set, in the latter case. For a given dataset, the average information is a function only of the direction v . Under this formulation, the relevant subspace of dimension K = 1 would be defined by the direction v that maximizes the average information I(v).
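A sketch of this estimation procedure (illustrative only; the histogram bin count and the toy sigmoidal nonlinearity are assumptions, and the delta-function averages are replaced by simple histograms):

```python
# Estimate the information I(v) carried by projections onto a single direction v.
import numpy as np

def information_per_spike(stimuli, spikes, v, n_bins=20):
    """stimuli: (N, D) array; spikes: length-N array of 0/1; v: direction of shape (D,)."""
    v = v / np.linalg.norm(v)
    x = stimuli @ v                                   # projections s . v
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    p_all = np.histogram(x, bins=edges)[0].astype(float)
    p_spk = np.histogram(x[spikes > 0], bins=edges)[0].astype(float)
    p_all /= p_all.sum()
    p_spk /= p_spk.sum()
    nz = (p_spk > 0) & (p_all > 0)
    return float(np.sum(p_spk[nz] * np.log2(p_spk[nz] / p_all[nz])))

# Toy data: spikes depend only on the projection onto a hidden direction v_true.
rng = np.random.default_rng(1)
D, N = 10, 20000
stimuli = rng.normal(size=(N, D))
v_true = np.eye(D)[0]
p_spike = 1 / (1 + np.exp(-3 * (stimuli @ v_true - 1)))   # assumed sigmoidal nonlinearity
spikes = (rng.random(N) < p_spike).astype(int)

print(information_per_spike(stimuli, spikes, v_true))       # relatively large
print(information_per_spike(stimuli, spikes, np.eye(D)[1])) # near zero for an irrelevant direction
```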
This procedure can readily be extended to a relevant subspace of dimension K > 1 by defining
and
and maximizing I(v^K).
Maximally informative dimensions does not make any assumptions about the Gaussianity of the stimulus set, which is important, because naturalistic stimuli tend to have non-Gaussian statistics. In this way the technique is more robust than other dimensionality reduction techniques such as spike-triggered covariance analyses. | https://en.wikipedia.org/wiki/Maximally_informative_dimensions |
In graph theory , a maximally matchable edge in a graph is an edge that is included in at least one maximum-cardinality matching in the graph. [ 1 ] An alternative term is allowed edge . [ 2 ] [ 3 ]
A fundamental problem in matching theory is: given a graph G , find the set of all maximally matchable edges in G. This is equivalent to finding the union of all maximum matchings in G (this is different than the simpler problem of finding a single maximum matching in G ). Several algorithms for this problem are known.
Consider a matchmaking agency with a pool of men and women. Given the preferences of the candidates, the agency constructs a bipartite graph where there is an edge between a man and a woman if they are compatible. The ultimate goal of the agency is to create as many compatible couples as possible, i.e., find a maximum-cardinality matching in this graph. Towards this goal, the agency first chooses an edge in the graph, and suggests to the man and woman on both ends of the edge to meet. Now, the agency must take care to only choose a maximally matchable edge. This is because, if it chooses a non-maximally matchable edge, it may get stuck with an edge that cannot be completed to a maximum-cardinality matching. [ 1 ]
Let G = ( V , E ) be a graph, where V are the vertices and E are the edges. A matching in G is a subset M of E , such that each vertex in V is adjacent to at most a single edge in M . A maximum matching is a matching of maximum cardinality.
An edge e in E is called maximally matchable (or allowed ) if there exists a maximum matching M that contains e .
Currently, the best known deterministic algorithm for general graphs runs in time O(VE). [ 2 ]
There is a randomized algorithm for general graphs in time Õ(V^2.376). [ 4 ]
In bipartite graphs, if a single maximum-cardinality matching is known, it is possible to find all maximally matchable edges in linear time, O(V + E). [ 1 ]
If a maximum matching is not known, it can be found by existing algorithms. In this case, the resulting overall runtime is O(V^{1/2} E) for general bipartite graphs and O((V / log V)^{1/2} E) for dense bipartite graphs with E = Θ(V²).
The algorithm for finding maximally matchable edges is simpler when the graph admits a perfect matching . [ 1 ] : sub.2.1
Let the bipartite graph be G = (X + Y, E), where X = (x_1, …, x_n) and Y = (y_1, …, y_n). Let the perfect matching be M = {(x_1, y_1), …, (x_n, y_n)}.
Theorem: an edge e is maximally matchable if-and-only-if e is included in some M-alternating cycle - a cycle that alternates between edges in M and edges not in M . Proof :
Now, consider a directed graph H on the vertex set Z = (z_1, …, z_n), where there is an edge from z_i to z_j in H iff i ≠ j and there is an edge between x_i and y_j in G (note that by assumption such edges are not in M ). Each M -alternating cycle in G corresponds to a directed cycle in H . A directed edge belongs to a directed cycle iff both its endpoints belong to the same strongly connected component . There are algorithms for finding all strongly connected components in linear time. Therefore, the set of all maximally matchable edges can be found as follows: construct the directed graph H ; compute its strongly connected components; mark every edge (x_i, y_j) with i ≠ j whose endpoints z_i and z_j lie in the same component as maximally matchable; and mark every matching edge (x_i, y_i) as maximally matchable as well, since it already belongs to the maximum matching M . The sketch after this paragraph illustrates the procedure.
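A short sketch of this procedure (illustrative; it assumes the vertices have already been relabeled so that the perfect matching pairs x_i with y_i, and it uses networkx only to compute strongly connected components):

```python
# SCC-based computation of maximally matchable edges in the perfect-matching case.
import networkx as nx

def maximally_matchable_edges(n, edges):
    """edges: set of pairs (i, j) meaning x_i -- y_j; must contain (i, i) for all i."""
    H = nx.DiGraph()
    H.add_nodes_from(range(n))
    for i, j in edges:
        if i != j:
            H.add_edge(i, j)              # z_i -> z_j  iff  x_i -- y_j is a non-matching edge

    # Map each vertex to the index of its strongly connected component.
    scc_index = {}
    for k, component in enumerate(nx.strongly_connected_components(H)):
        for v in component:
            scc_index[v] = k

    result = set()
    for i, j in edges:
        if i == j or scc_index[i] == scc_index[j]:   # matching edges, or same SCC
            result.add((i, j))
    return result

# Example: matching edges (0,0), (1,1) plus the non-matching edges (0,1), (1,0),
# which together form an M-alternating 4-cycle, so every edge is maximally matchable.
edges = {(0, 0), (1, 1), (0, 1), (1, 0)}
print(sorted(maximally_matchable_edges(2, edges)))   # [(0, 0), (0, 1), (1, 0), (1, 1)]
```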
Let the bipartite graph be G = (X + Y, E), where X = (x_1, …, x_n) and Y = (y_1, …, y_{n′}) and n ≤ n′. Let the given maximum matching be M = {(x_1, y_1), …, (x_t, y_t)}, where t ≤ n ≤ n′. The edges in E can be categorized into two classes:
Theorem: All M -lower edges are maximally matchable. [ 1 ] : sub.2.2 Proof : suppose e = (x_i, y_j) where x_i is saturated and y_j is not. Then removing (x_i, y_i) from M and adding e = (x_i, y_j) yields a new maximum-cardinality matching that contains e.
Hence, it remains to find the maximally matchable edges among the M -upper ones.
Let H be the subgraph of G induced by the M -saturated nodes. Note that M is a perfect matching in H . Hence, using the algorithm of the previous subsection, it is possible to find all edges that are maximally matchable in H . Tassa [ 1 ] explains how to find the remaining maximally matchable edges, as well as how to dynamically update the set of maximally matchable edges when the graph changes. | https://en.wikipedia.org/wiki/Maximally_matchable_edge |
Maximilian Fichtner (born 1961 in Heidelberg, Germany) is professor for Solid State Chemistry at the Ulm University and executive director of the Helmholtz Institute Ulm for Electrochemical Energy Storage (HIU).
Fichtner was educated in Food Chemistry and Chemistry at the University of Karlsruhe, now the Karlsruhe Institute of Technology , where he was awarded the Diploma in Chemistry. In 1992 he received the Ph.D. in Chemistry/Surface Science with distinction and the Hermann Billing Award [ 1 ] for his thesis. In the thesis he developed a novel method for a spatially resolved speciation of beam-sensitive salts by SIMS. With the method he analysed the surface composition of atmospheric salt aerosol particles and contributed to the current climate model.
Following his PhD, Fichtner spent two years as a young researcher at the former Karlsruhe Nuclear Research Center (KfK) and developed his method further so that it could also be applied to organic materials. In 1994 he became assistant to the board of directors of the Karlsruhe Research Center ( FZK ), in the area of Basic Research and New Technologies, with Herbert Gleiter as director. In 1997 he left to build up a new activity on microprocess engineering, with a focus on heterogeneous catalysis in microchannels, for fuel processing (methanol steam reforming , partial oxidation of methane) and the synthesis of chemicals. The group was eventually integrated into the new Institute for Microprocess Engineering in 2001. In 2000 he was offered a position at the new Institute of Nanotechnology , INT [ 2 ] (founding directors: Herbert Gleiter, Jean-Marie Lehn, Dieter Fenske), to build up a new activity on nanoscale materials for energy storage. Since then he has been a group leader there. In 2012 he received a call from Ulm University to become a professor (W3) in Solid State Chemistry, which he accepted in 2013. The position is connected to a function as group leader at the new Helmholtz Institute Ulm . Since 2015 he has been executive director of the institute.
Fichtner has co-ordinated several EU projects and collaborative projects from the German ministries of Economy and Research and Education. He has been organizer of various symposia at MRS and GRC conferences, and he was Chair of the GORDON Research Conference on Metal-Hydrogen Systems in 2013 [ 3 ] and of the 1st International Symposium on Magnesium Batteries (MagBatt) in 2016. [ 4 ]
In his career Fichtner worked on various topics, covering Theoretical Chemistry, Instrumental Analysis, Higher Administration, Chemical Engineering, Heterogeneous Catalysis, Hydrogen Storage, Electrochemistry and Battery Research.
Pioneering achievements were the first measurements of salts with Secondary Neutral Mass Spectrometry, [ 5 ] the development of a depth-resolved speciation of beam-sensitive salts, and a microstructure reactor which could safely burn a stoichiometric hydrogen-oxygen mixture and transfer the heat to a thermal oil, thus demonstrating the capability of running dangerous reactions safely in microstructure reactors. [ 6 ]
In the development of hydrogen storage materials, new complex hydride compounds were synthesized and investigated, [ 7 ] [ 8 ] and the fastest charge and discharge of an aluminum hydride to date, achieved with a new Ti13 catalyst first applied for that purpose by the Bogdanović group at the Max Planck Institute in Mülheim, was independently confirmed. [ 9 ] Further work in this area focused on elucidating nanoscale effects in energy materials, [ 10 ] [ 11 ] and studies of the change of thermodynamic properties of complex hydrides, building on pioneering work on hydrogen and nanostructure effects carried out by various groups worldwide since the late 1990s, were conducted in his group. [ 12 ]
In battery research, new synthesis methods were developed to stabilize conversion materials, [ 13 ] [ 14 ] new types of batteries based on anionic shuttles were presented [ 15 ] [ 16 ] and new electrolytes with outstanding voltage windows and non-nucleophilic properties were developed for magnesium batteries, [ 17 ] making reversible Mg-S cells possible. Moreover, a new class of cathode materials is being studied with the highest packing densities for Li ions to date, the so-called Li-excess disordered rocksalt materials (DRX), developed by the Gerbrand Ceder group. [ 18 ] | https://en.wikipedia.org/wiki/Maximilian_Fichtner
Maximilien Winter (1871–1935) was a French philosopher of mathematics.
In 1893 Winter helped Xavier Léon to found the Revue de métaphysique et de morale . After the First World War Winter ran the Supplément of the Revue until his death in 1935. [ 1 ]
| https://en.wikipedia.org/wiki/Maximilien_Winter
Maximization is a style of decision-making characterized by seeking the best option through an exhaustive search through alternatives. It is contrasted with satisficing , in which individuals evaluate options until they find one that is "good enough".
The distinction between "maximizing" and "satisficing" was first made by Herbert A. Simon in 1956. [ 1 ] [ 2 ] Simon noted that although fields like economics posited maximization or "optimizing" as the rational method of making decisions, humans often lack the cognitive resources or the environmental affordances to maximize. Simon instead formulated an approach known as bounded rationality , which he also referred to as satisficing. This approach was taken to be adaptive and, indeed, necessary, given our cognitive limitations. Thus, satisficing was taken to be a universal of human cognition.
Although Simon's work on bounded rationality was influential and can be seen as the origin of behavioral economics , the distinction between maximizing and satisficing gained new life 40 years later in psychology. Schwartz, Ward, Monterosso, Lyubomirsky, White, and Lehman (2002) defined maximization as an individual difference, arguing that some people were more likely than others to engage in a comprehensive search for the best option. [ 3 ] Thus, instead of conceptualizing satisficing as a universal principle of human cognitive abilities, Schwartz et al. demonstrated that some individuals were more likely than others to display this style of decision-making.
Based on the work of Schwartz et al. (2002), much of the literature on maximization has defined maximization as comprising three major components: high standards, alternative search, and decision difficulty. [ 4 ]
Since these components were identified, the majority of the research on maximization has focused on which of these components are relevant (or most relevant) to the definition of maximizing. Researchers have variously argued that decision difficulty is irrelevant to defining maximizing, [ 5 ] that high standards is the only relevant component, [ 6 ] and that high standards is the only irrelevant component. [ 7 ] Many of these attempts to define maximizing have resulted in the creation of new psychological scales to measure the trait.
Recently, in a theoretical paper Cheek and Schwartz (2016) proposed a two-component model of maximization, defining maximization as the goal of choosing the best option, pursued by the strategy of searching exhaustively through alternatives. [ 8 ] Along similar lines, Hughes and Scholer (2017) proposed that researchers could differentiate between the goals and strategies of maximizers. However, they argued that the high standards goal is central to the definition of maximizing, but that some maximizers engage in adaptive or maladaptive strategies in order to pursue that goal. They showed that individuals with high standards could be distinguished by the use of the alternative search strategy, and that this strategy in particular predicted more negative emotions on a decision task. [ 9 ]
Initial research on maximizing showed uniformly negative outcomes associated with chronic maximizing tendencies. Such tendencies were associated with lower happiness, self-esteem , and life satisfaction ; [ 3 ] with greater depression and regret; [ 3 ] with lower satisfaction with choices; [ 10 ] [ 11 ] with greater perfectionism; [ 3 ] [ 12 ] and with greater decision-making confusion, commitment anxiety, and rumination. [ 13 ] One study by Iyengar, Wells, and Schwartz (2006) tracked job seekers and found that although maximizers were able to find jobs with starting salaries 20% higher than satisficers, they were less satisfied with both the job search process and the job they were about to start. [ 11 ] Thus, although maximizers were able to find objectively better options, they ended up subjectively worse off as a result.
However, as disagreement over the definition of maximizing grew, research began to show diverging effects: some negative, some neutral, and some positive. Diab, Gillespie, and Highhouse (2008), for example, contended that maximizing was not actually related to lower life satisfaction, and was not related to indecisiveness, avoidance, or neuroticism . [ 6 ] Other studies showed maximizing to be associated with higher self-efficacy , optimism, and intrinsic motivation ; [ 5 ] and with higher life satisfaction and positive affect. [ 14 ]
Much of this disagreement can ultimately be ascribed to the different scales that were created to measure maximizing. But research on the three components mentioned above (high standards, alternative search, and decision difficulty) found that these components themselves predicted differing outcomes. High standards has generally shown little association with negative outcomes, and evidence of association with positive outcomes. [ 4 ] [ 7 ] [ 14 ] [ 15 ] [ 16 ] In contrast, alternative search and decision difficulty have shown much stronger associations with the negative outcomes listed above. Thus, the question of whether maximizing is adaptive or maladaptive may ultimately depend on which of these components one sees as essential to the definition of maximizing itself.
Limited research exists on other psychological constructs to which maximizing is related. However, several studies have shown maximizing to be associated with perfectionism , [ 12 ] [ 17 ] and Nenkov et al. (2008) qualified this relationship as being true primarily for the high standards component. [ 4 ] Some research has also linked maximizing to high need for cognition , again primarily with the high standards component. [ 4 ] [ 5 ] [ 16 ] Finally, research examining the association between maximizing and personality dimensions of the Big Five personality model have found high standards to be associated with high conscientiousness and decision difficulty with low conscientiousness. [ 18 ] Alternative search has also been associated with high neuroticism, and high standards has been associated with high openness to experience. [ 14 ]
Given the disagreement over the definition of maximizing, as well as attempts to increase the reliability of existing measures, several scales have been created to measure maximization; these scales differ in which of the components above they include.
Cheek and Schwartz (2016) [ 8 ] reviewed the literature on the measurement of maximization and proposed that researchers interested in studying individual differences in maximization should measure two constructs: the maximization goal and the maximization strategy. They recommended that researchers use the 7-item Maximizing Tendency Scale published by Dalal et al. (2015) to measure the maximization goal. They also tentatively recommended that researchers use the alternative search subscale of the Maximization Inventory, but noted that future research should continue to refine the measurement of the maximization strategy given psychometric concerns. | https://en.wikipedia.org/wiki/Maximization_(psychology) |
An estimation procedure that is often claimed to be part of Bayesian statistics is the maximum a posteriori ( MAP ) estimate of an unknown quantity, that equals the mode of the posterior density with respect to some reference measure, typically the Lebesgue measure . The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. It is closely related to the method of maximum likelihood (ML) estimation, but employs an augmented optimization objective which incorporates a prior density over the quantity one wants to estimate. MAP estimation is therefore a regularization of maximum likelihood estimation, so is not a well-defined statistic of the Bayesian posterior distribution.
Assume that we want to estimate an unobserved population parameter θ {\displaystyle \theta } on the basis of observations x {\displaystyle x} . Let f {\displaystyle f} be the sampling distribution of x {\displaystyle x} , so that f ( x ∣ θ ) {\displaystyle f(x\mid \theta )} is the probability of x {\displaystyle x} when the underlying population parameter is θ {\displaystyle \theta } . Then the function:
{\displaystyle \theta \mapsto f(x\mid \theta )}
is known as the likelihood function and the estimate:
{\displaystyle {\hat {\theta }}_{\mathrm {MLE} }(x)={\underset {\theta }{\operatorname {arg\,max} }}\ f(x\mid \theta )}
is the maximum likelihood estimate of θ {\displaystyle \theta } .
Now assume that a prior distribution g {\displaystyle g} over θ {\displaystyle \theta } exists. This allows us to treat θ {\displaystyle \theta } as a random variable as in Bayesian statistics . We can calculate the posterior density of θ {\displaystyle \theta } using Bayes' theorem :
{\displaystyle \theta \mapsto f(\theta \mid x)={\frac {f(x\mid \theta )\,g(\theta )}{\int _{\Theta }f(x\mid \vartheta )\,g(\vartheta )\,d\vartheta }}}
where g {\displaystyle g} is the density function of θ {\displaystyle \theta } and Θ {\displaystyle \Theta } is the domain of g {\displaystyle g} .
The method of maximum a posteriori estimation then estimates θ {\displaystyle \theta } as the mode of the posterior density of this random variable:
{\displaystyle {\hat {\theta }}_{\mathrm {MAP} }(x)={\underset {\theta }{\operatorname {arg\,max} }}\ {\frac {f(x\mid \theta )\,g(\theta )}{\int _{\Theta }f(x\mid \vartheta )\,g(\vartheta )\,d\vartheta }}={\underset {\theta }{\operatorname {arg\,max} }}\ f(x\mid \theta )\,g(\theta )}
The denominator of the posterior density (the marginal likelihood of the model) is always positive and does not depend on θ {\displaystyle \theta } and therefore plays no role in the optimization. Observe that the MAP estimate of θ {\displaystyle \theta } coincides with the ML estimate when the prior g {\displaystyle g} is uniform (i.e., g {\displaystyle g} is a constant function ), which occurs whenever the prior distribution is taken as the reference measure, as is typical in function-space applications.
When the loss function is of the form
{\displaystyle L(\theta ,a)={\begin{cases}0,&{\text{if }}|a-\theta |<c,\\1,&{\text{otherwise}},\end{cases}}}
as c {\displaystyle c} goes to 0, the Bayes estimator approaches the MAP estimator, provided that the distribution of θ {\displaystyle \theta } is quasi-concave. [ 1 ] But generally a MAP estimator is not a Bayes estimator unless θ {\displaystyle \theta } is discrete .
MAP estimates can be computed in several ways: analytically, when the mode(s) of the posterior density can be given in closed form (this is the case when conjugate priors are used); via numerical optimization such as the conjugate gradient method or Newton's method , which usually requires first or second derivatives; via a modification of an expectation-maximization algorithm , which does not require derivatives of the posterior density; or via a Monte Carlo method using simulated annealing .
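As a rough sketch of the numerical route, the log-posterior can be maximized directly; the following Python fragment assumes SciPy is available and uses hypothetical Bernoulli data with a Beta prior, so that the result can be checked against the known closed-form posterior mode.

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data: 7 successes in 10 Bernoulli trials, with a Beta(2, 2) prior on theta
successes, trials = 7, 10
a, b = 2.0, 2.0

def neg_log_posterior(theta):
    # negative (log-likelihood + log-prior), up to an additive constant
    return -((successes + a - 1) * np.log(theta) + (trials - successes + b - 1) * np.log(1 - theta))

res = minimize_scalar(neg_log_posterior, bounds=(1e-6, 1 - 1e-6), method="bounded")
closed_form_mode = (successes + a - 1) / (trials + a + b - 2)  # mode of the Beta posterior
print(res.x, closed_form_mode)  # both approximately 0.667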
While only mild conditions are required for MAP estimation to be a limiting case of Bayes estimation (under the 0–1 loss function), [ 1 ] it is not representative of Bayesian methods in general. This is because MAP estimates are point estimates, and depend on the arbitrary choice of reference measure, whereas Bayesian methods are characterized by the use of distributions to summarize data and draw inferences: thus, Bayesian methods tend to report the posterior mean or median instead, together with credible intervals . This is both because these estimators are optimal under squared-error and linear-error loss respectively—which are more representative of typical loss functions —and for a continuous posterior distribution there is no loss function which suggests the MAP is the optimal point estimator. In addition, the posterior density may often not have a simple analytic form: in this case, the distribution can be simulated using Markov chain Monte Carlo techniques, while optimization to find the mode(s) of the density may be difficult or impossible. [ citation needed ]
In many types of models, such as mixture models , the posterior may be multi-modal . In such a case, the usual recommendation is that one should choose the highest mode: this is not always feasible ( global optimization is a difficult problem), nor in some cases even possible (such as when identifiability issues arise). Furthermore, the highest mode may be uncharacteristic of the majority of the posterior, especially in many dimensions.
Finally, unlike ML estimators, the MAP estimate is not invariant under reparameterization. Switching from one parameterization to another involves introducing a Jacobian that impacts on the location of the maximum. [ 2 ] In contrast, Bayesian posterior expectations are invariant under reparameterization.
As an example of the difference between Bayes estimators mentioned above (mean and median estimators) and using a MAP estimate, consider the case where there is a need to classify inputs x {\displaystyle x} as either positive or negative (for example, loans as risky or safe). Suppose there are just three possible hypotheses about the correct method of classification h 1 {\displaystyle h_{1}} , h 2 {\displaystyle h_{2}} and h 3 {\displaystyle h_{3}} with posteriors 0.4, 0.3 and 0.3 respectively. Suppose given a new instance, x {\displaystyle x} , h 1 {\displaystyle h_{1}} classifies it as positive, whereas the other two classify it as negative. Using the MAP estimate for the correct classifier h 1 {\displaystyle h_{1}} , x {\displaystyle x} is classified as positive, whereas the Bayes estimators would average over all hypotheses and classify x {\displaystyle x} as negative.
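The contrast can be made concrete with a few lines of Python mirroring the numbers above (a sketch added here for illustration, not part of the original example):

# Posterior probabilities of the three hypotheses
posteriors = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
# Label assigned to the new instance x by each hypothesis (+1 positive, -1 negative)
votes = {"h1": +1, "h2": -1, "h3": -1}

# MAP: commit to the single most probable hypothesis
map_hypothesis = max(posteriors, key=posteriors.get)
map_label = votes[map_hypothesis]                      # +1, i.e. positive

# Bayes-optimal: average the votes weighted by posterior probability
expected_vote = sum(posteriors[h] * votes[h] for h in posteriors)
bayes_label = +1 if expected_vote > 0 else -1          # 0.4 - 0.6 < 0, i.e. negative

print(map_label, bayes_label)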
Suppose that we are given a sequence ( x 1 , … , x n ) {\displaystyle (x_{1},\dots ,x_{n})} of IID N ( μ , σ v 2 ) {\displaystyle N(\mu ,\sigma _{v}^{2})} random variables and a prior distribution of μ {\displaystyle \mu } is given by N ( μ 0 , σ m 2 ) {\displaystyle N(\mu _{0},\sigma _{m}^{2})} . We wish to find the MAP estimate of μ {\displaystyle \mu } . Note that the normal distribution is its own conjugate prior , so we will be able to find a closed-form solution analytically.
The function to be maximized is then given by [ 3 ]
{\displaystyle g(\mu )\prod _{j=1}^{n}f(x_{j}\mid \mu )}
which is equivalent to minimizing the following function of μ {\displaystyle \mu } :
{\displaystyle \sum _{j=1}^{n}{\frac {(x_{j}-\mu )^{2}}{2\sigma _{v}^{2}}}+{\frac {(\mu -\mu _{0})^{2}}{2\sigma _{m}^{2}}}\,.}
Thus, we see that the MAP estimator for μ is given by [ 3 ]
{\displaystyle {\hat {\mu }}_{\mathrm {MAP} }={\frac {\sigma _{m}^{2}\sum _{j=1}^{n}x_{j}+\sigma _{v}^{2}\,\mu _{0}}{\sigma _{m}^{2}\,n+\sigma _{v}^{2}}}\,,}
which turns out to be a linear interpolation between the prior mean and the sample mean weighted by their respective covariances.
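A quick numerical check of this closed form (a sketch with synthetic data and illustrative prior parameters, assuming NumPy and SciPy are available):

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
mu0, sigma_m = 1.0, 2.0           # prior N(mu0, sigma_m^2)
sigma_v, n = 1.5, 20              # known observation noise and sample size
x = rng.normal(3.0, sigma_v, n)   # synthetic observations

# Closed-form MAP estimate: weighted combination of the sample sum and the prior mean
mu_map = (sigma_m**2 * x.sum() + sigma_v**2 * mu0) / (sigma_m**2 * n + sigma_v**2)

# Direct numerical minimization of the quadratic objective, for comparison
neg_log_post = lambda mu: ((x - mu)**2).sum() / (2 * sigma_v**2) + (mu - mu0)**2 / (2 * sigma_m**2)
mu_num = minimize_scalar(neg_log_post).x

print(mu_map, mu_num)  # the two values agree closely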
The case of σ m → ∞ {\displaystyle \sigma _{m}\to \infty } is called a non-informative prior and leads to an improper probability distribution ; in this case μ ^ M A P → μ ^ M L E . {\displaystyle {\hat {\mu }}_{\mathrm {MAP} }\to {\hat {\mu }}_{\mathrm {MLE} }.} | https://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation |
The maximum acceptable toxicant concentration (MATC) is a value that is calculated through aquatic toxicity tests to help set water quality regulations for the protection of aquatic life. Using the results of a partial life-cycle chronic toxicity test, the MATC is reported as the geometric mean between the No Observed Effect Concentration ( NOEC ) and the lowest observed effect concentration ( LOEC ). [ 1 ]
The MATC is used to set regulatory standards for priority pollutants [ 2 ] under the US federal Clean Water Act . Regulatory guidelines give two acceptable concentrations of pollutants to protect against effects: chronic or acute. Since the MATC should only be reported in chronic toxicity tests, there is a widely accepted method to convert the chronic MATC to a concentration that protects against acute effects. [ 1 ]
The MATC is calculated and reported from the results of a number of standard procedures designed by the United States Environmental Protection Agency (US EPA) and other organizations to maintain high accuracy and precision among all toxicity tests for regulatory purposes.
In a toxicity test, the NOEC and LOEC are derived as a comparison from the negative control , or the experimental group that does not contain the chemical in question. The NOEC is the highest concentration that does not cause an effect statistically different from the negative control through statistical hypothesis testing . Likewise, the LOEC is the lowest concentration tested that does cause an effect statistically different from the negative control. The MATC is the geometric mean between these two values, such that: MATC = √(NOEC × LOEC)
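For instance, with a hypothetical NOEC of 10 µg/L and LOEC of 20 µg/L, the calculation is simply:

from math import sqrt

noec = 10.0   # hypothetical No Observed Effect Concentration, ug/L
loec = 20.0   # hypothetical Lowest Observed Effect Concentration, ug/L

matc = sqrt(noec * loec)   # geometric mean of NOEC and LOEC
print(round(matc, 1))      # about 14.1 ug/L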
The MATC is calculated to protect against chronic effects on overall function or health of an organism, not death. A partial life cycle test must be used. This type of toxicity test uses organisms in their most sensitive life stages, usually during times of early reproduction and growth, but not juveniles. [ 3 ] The MATC is the highest concentration that should not cause chronic effects; however, for regulatory purposes, a maximum concentration to protect against acute effects must exist as well.
The MATC can be applied to the results of an acute toxicity test to obtain a concentration that would protect against adverse effects during an acute exposure. An LC 50 , or the concentration at which 50% of the organisms die during an acute toxicity test, is used to derive a value called the acute-to-chronic ratio (ACR).
The MATC can be used to calculate the ACR as follows: A C R = L C 50 / M A T C {\displaystyle ACR=LC50/MATC}
The ACR is useful for estimating an MATC for species in which only acute toxicity data exists, or for setting regulatory guidelines for the protection of aquatic life through water quality criteria by the US EPA. [ 3 ]
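A sketch of this use, with all concentrations hypothetical: the ratio is formed for a species with both data types and then reused for a species with only an acute LC50.

from math import sqrt

# Species A: both chronic and acute data are available
lc50_a = 100.0                  # acute LC50, ug/L
matc_a = sqrt(10.0 * 20.0)      # chronic MATC from NOEC = 10 and LOEC = 20 ug/L
acr = lc50_a / matc_a           # acute-to-chronic ratio, about 7.1

# Species B: only an acute LC50 exists, so its MATC is estimated with the ACR
lc50_b = 250.0
matc_b_estimate = lc50_b / acr  # about 35.4 ug/L
print(round(acr, 1), round(matc_b_estimate, 1))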
The US EPA is the governmental organization responsible for writing and enforcing regulations under the environmental laws passed by Congress. The Clean Water Act was passed in 1972.
Section 304(a)(1) of the Clean Water Act is the Water Quality Criteria (WQC) developed for the protection of aquatic life and human health. [ 4 ] The MATC and ACR are used in a sequence of calculations to obtain the Criterion Maximum Concentration and Criterion Continuous Concentration (CMC and CCC, respectively) for the chemicals being regulated. [ 5 ]
The CMC and CCC are two of the six parts of the aquatic life criterion under the WQC, and are the actual regulatory values for all priority pollutants tested. The CMC is the highest concentration of a chemical in water that aquatic organisms can be exposed to acutely without causing an adverse effect. Likewise, the CCC is the highest concentration of a chemical in water that aquatic organisms can be exposed to indefinitely without resulting in an adverse effect. Typically, the CMC is higher than the CCC. [ 5 ]
Environment Canada is the regulatory agency for environmental protection in Canada. Under the Canada Water Act of 1970, the Canadian Water Quality Guidelines (CWQG) for the Protection of Aquatic Life give regulatory guidelines of maximum concentrations of pollutants that are acceptable in freshwater and marine environments. [ 6 ]
The CWQG's long- and short-term exposure concentrations are derived in a similar way to those used by the US EPA, and are the equivalents of the CMC and CCC. [ 7 ] When an MATC is reported with toxicity tests, it has sometimes been called a threshold-observed-effect concentration (TOEC). The MATC and TOEC are both calculated as the geometric mean of the NOEC and LOEC, and are often used interchangeably. [ 7 ] [ 8 ]
Standard methods are designed and used widely to maximize precision and accuracy for all toxicity tests. Values derived from toxicity tests such as the MATC are reported to regulatory organizations like the US EPA and Environment Canada so more confident regulations can be designed. [ citation needed ]
Some common standard methods include those designed by the governmental organizations like Environment Canada, the US EPA, or the United States Food and Drug Administration (FDA) . Others are designed by scientific organizations such as ASTM International , or the OECD .
Many of these methods use the same test organism or are designed for the same exposure time. Common test organisms include, but are not limited to, daphnia , fathead minnow , rainbow trout , and mussels . Acute toxicity tests are normally 24–96 hours, whereas chronic tests will typically run for a week or longer.
Using the MATC to derive regulatory guidelines has been accompanied by some debate. Hypothesis testing, or statistical tests performed with data sets that only report a significant difference, is not considered the most statistically robust approach. There are no confidence intervals to show a measure of uncertainty in a NOEC or LOEC. In addition, the NOEC and LOEC can only be concentrations actually used in the test, and nothing in between. [ 9 ] For these reasons, values derived through curve-fitting methods, such as an LC50 or an EC10 (the concentration that causes the measured effect in 10% of organisms), would be preferred where possible.
From a regulatory standpoint, there are advantages to using results from hypothesis tests. NOECs and LOECs were used more often in the past, and there are more test results reporting NOECs and LOECs than EC10s. The time and effort required to repeat all of the previous tests to derive a different value is not seen as a good use of resources. In addition, the use of NOECs and LOECs allows for reporting of one number to regulatory agencies. [ 8 ] Water Quality Criteria are reported as one number that the actual concentration must remain below. If the MATC were reported as a range of values to account for uncertainty, the regulatory guidelines would not be presented as a single value.
| https://en.wikipedia.org/wiki/Maximum_acceptable_toxicant_concentration
Maximum Allowable Operating Pressure ( MAOP ) is a pressure limit set, usually by a government body, which applies to compressed gas pressure vessels , pipelines , and storage tanks . For pipelines, this value is derived from Barlow's Formula , which takes into account wall thickness, diameter, allowable stress (which is a function of the material used), and a safety factor .
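A sketch of Barlow's formula for a pipeline, with illustrative numbers; the allowable stress and design factor in practice depend on the governing code, pipe grade, and location class.

# Barlow's formula: hoop-stress-limited pressure of a thin-walled pipe, derated by a design factor
smys = 52000.0        # specified minimum yield strength, psi (e.g. an X52-type steel)
wall = 0.375          # wall thickness, inches
diameter = 16.0       # outside diameter, inches
design_factor = 0.72  # illustrative safety/design factor

maop = (2.0 * smys * wall / diameter) * design_factor
print(round(maop), "psi")   # about 1755 psi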
The MAOP is less than the MAWP ( maximum allowable working pressure ). MAWP is defined as the maximum pressure based on the design codes that the weakest component of a pressure vessel can handle. [ 1 ] Commonly standard wall thickness components are used in fabricating pressurized equipment, and hence are able to withstand pressures above their design pressure. The MAWP is the pressure stamped on the pressure equipment, and the pressure that must not be exceeded in operation.
Design pressure is the pressure a pressurized item is designed to, and is higher than any expected operating pressures. Due to the availability of standard wall thickness materials, many components will have a MAWP higher than the required design pressure. For pressure vessels, all pressures are defined as being at highest point of the unit in the operating position, and do not include static head pressure. [ 2 ] The equipment designer needs to account for the higher pressures occurring at some components due to static head pressure.
Relief valves are set at the design pressure of the pressurized item and sized to prevent the item under pressure from being over-pressurized. Depending on the design code to which the pressurized item is designed, an over-pressure allowance can be used when sizing the relief valve. This is +10% for PD 5500, and ASME Section VIII div 1 & 2 (with an additional +10% allowance in ASME Section VIII for a fire relief case). ASME has different criteria for steam boilers .
Maximum expected operating pressure ( MEOP ) is the highest expected operating pressure, which is synonymous with maximum operating pressure (MOP). [ 3 ]
| https://en.wikipedia.org/wiki/Maximum_allowable_operating_pressure
A maximum clade credibility tree is a tree that summarises the results of a Bayesian phylogenetic inference . Whereas a majority-rule tree combines the most common clades , and usually yields a tree that wasn't sampled in the analysis, the maximum-credibility method evaluates each of the sampled posterior trees. Each clade within the tree is given a score based on the fraction of times that it appears in the set of sampled posterior trees, and the product of these scores are taken as the tree's score. The tree with the highest score is then the maximum clade credibility tree. [ 1 ] | https://en.wikipedia.org/wiki/Maximum_clade_credibility_tree |
In graph theory and theoretical computer science , a maximum common induced subgraph of two graphs G and H is a graph that is an induced subgraph of both G and H , and that has as many vertices as possible. Finding this graph is NP-hard .
In the associated decision problem , the input is two graphs G and H and a number k . The problem is to decide whether G and H have a common induced subgraph with at least k vertices. This problem is NP-complete . [ 1 ] It is a generalization of the induced subgraph isomorphism problem , which arises when k equals the number of vertices in the smaller of G and H , so that this entire graph must appear as an induced subgraph of the other graph.
Based on hardness of approximation results for the maximum independent set problem, the maximum common induced subgraph problem is also hard to approximate. [ 2 ] This implies that, unless P = NP , there is no approximation algorithm that, in polynomial time on n {\displaystyle n} -vertex graphs, always finds a solution within a factor of n 1 − ϵ {\displaystyle n^{1-\epsilon }} of optimal, for any ϵ > 0 {\displaystyle \epsilon >0} . [ 3 ]
One possible solution for this problem is to build a modular product graph of G and H .
In this graph, the largest clique corresponds to a maximum common induced subgraph of G and H . Therefore, algorithms for finding maximum cliques can be used to find the maximum common induced subgraph. [ 4 ] Moreover, a modified maximum-clique algorithm can be used to find a maximum common connected subgraph. [ 5 ]
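A minimal sketch of the clique-based approach in Python, assuming the networkx library is available; the function name and the small example graphs are chosen for illustration only.

import networkx as nx
from itertools import combinations

def max_common_induced_subgraph(G, H):
    # Build the modular product: (u, v) and (u2, v2) are adjacent when u != u2, v != v2,
    # and u-u2 and v-v2 are either both edges or both non-edges.
    P = nx.Graph()
    P.add_nodes_from((u, v) for u in G for v in H)
    for (u, v), (u2, v2) in combinations(P.nodes, 2):
        if u != u2 and v != v2 and G.has_edge(u, u2) == H.has_edge(v, v2):
            P.add_edge((u, v), (u2, v2))
    # A maximum clique in the modular product corresponds to a maximum common induced subgraph.
    return max(nx.find_cliques(P), key=len)

G = nx.cycle_graph(4)   # C4
H = nx.path_graph(4)    # P4
print(max_common_induced_subgraph(G, H))  # three matched vertex pairs (an induced path)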
The McSplit algorithm (along with its McSplit↓ variant) is a forward checking algorithm that does not use the clique encoding, but uses a compact data structure to keep track of the vertices in graph H to which each vertex in graph G may be mapped. Both versions of the McSplit algorithm outperform the clique encoding for many graph classes. [ 6 ] A more efficient implementation of McSplit is McSplitDAL+PR, which combines a Reinforcement Learning agent with some heuristic scores computed with the PageRank algorithm. [ 7 ]
Maximum common induced subgraph algorithms form the basis for both graph differencing and graph alignment. Graph differencing identifies and highlights differences between two graphs by pinpointing changes, additions, or deletions. Graph alignment involves finding correspondences between the vertices and edges of two graphs to identify similar structures.
Maximum common induced subgraph algorithms have a long tradition in bioinformatics , cheminformatics , [ 8 ] [ 9 ] pharmacophore mapping , [ 10 ] pattern recognition , [ 11 ] computer vision , code analysis, compilers, and model checking .
The problem is also particularly useful in software engineering and model-based systems engineering , where software code and engineering models (e.g., Simulink , UML diagrams ) are represented as graph data structures. Graph differencing can be used to detect changes between different versions of software code and models for change auditing, debugging, version control and collaborative team development. | https://en.wikipedia.org/wiki/Maximum_common_induced_subgraph |
The maximum density of a substance is the highest attainable density of the substance under given conditions.
Almost all known substances undergo thermal expansion in response to heating, meaning that a given mass of substance contracts to a smaller volume at low temperatures , when little thermal energy is present. Substances, especially fluids in which intermolecular forces are weak, also undergo compression upon the application of pressure . Nearly all substances therefore reach a density maximum at very low temperatures and very high pressures, conditions characteristic of the solid state of matter .
An especially notable irregular maximum density is that of water , which reaches a density peak at 4 °C (39 °F). This has important ramifications in Earth's ecosystem . [ 1 ]
| https://en.wikipedia.org/wiki/Maximum_density
In magnetics , the maximum energy product is an important figure-of-merit for the strength of a permanent magnet material. It is often denoted ( BH ) max and is typically given in units of either kJ/m 3 (kilojoules per cubic meter, in SI electromagnetism) or MGOe (mega- gauss - oersted , in gaussian electromagnetism ). [ 1 ] [ 2 ] 1 MGOe is equivalent to 7.958 kJ/m 3 . [ 3 ]
During the 20th century, the maximum energy product of commercially available magnetic materials rose from around 1 MGOe (e.g. in KS Steel ) to over 50 MGOe (in neodymium magnets ). [ 4 ] Other important permanent magnet properties include the remanence ( B r ) and coercivity ( H c ); these quantities are also determined from the saturation loop and are related to the maximum energy product, though not directly.
The maximum energy product is defined based on the magnetic hysteresis saturation loop ( B - H curve), in the demagnetizing portion where the B and H fields are in opposition. It is defined as the maximal value of the product of B and H along this curve (actually, the maximum of the negative of the product, − BH , since they have opposing signs):
{\displaystyle (BH)_{\max }=\max _{\text{demagnetization curve}}\left(-BH\right)}
Equivalently, it can be graphically defined as the area of the largest rectangle that can be drawn between the origin and the saturation demagnetization B-H curve (see figure).
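As a sketch, (BH)max can be read off a sampled demagnetization curve numerically; here an idealized straight-line second-quadrant curve with illustrative Br and Hc values is assumed, for which the analytical result is Br·Hc/4.

import numpy as np

Br = 1.2                      # remanence, tesla (illustrative)
Hc = 900e3                    # coercivity, A/m (illustrative)
H = np.linspace(-Hc, 0, 1000)
B = Br * (1 + H / Hc)         # idealized linear B-H curve in the demagnetizing quadrant

energy_product = -B * H       # -BH along the curve, in J/m^3
bh_max = energy_product.max()
print(bh_max / 1e3, "kJ/m^3")     # close to Br*Hc/4 = 270 kJ/m^3
print(bh_max / 7.958e3, "MGOe")   # roughly 34 MGOe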
The significance of ( BH ) max is that the volume of magnet necessary for any given application tends to be inversely proportional to ( BH ) max . This is illustrated by considering a simple magnetic circuit containing a permanent magnet of volume Vol mag and an air gap of volume Vol gap , connected to each other by a magnetic core . Suppose the goal is to reach a certain field strength B gap in the gap. In such a situation, the total magnetic energy in the gap (volume-integrated magnetic energy density) is directly equal to half the volume-integrated − BH in the magnet: [ 5 ]
{\displaystyle {\tfrac {1}{2}}B_{\text{gap}}H_{\text{gap}}\,\mathrm {Vol} _{\text{gap}}={\tfrac {1}{2}}\left(-BH\right)\,\mathrm {Vol} _{\text{mag}}}
thus in order to achieve the desired magnetic field in the gap, the required volume of magnet can be minimized by maximizing − BH in the magnet. By choosing a magnetic material with a high ( BH ) max , and also choosing the aspect ratio of the magnet so that its − BH is equal to ( BH ) max , the required volume of magnet to achieve a target flux density in the air gap is minimized. This expression assumes that the permeability of the core connecting the magnetic material to the air gap is infinite, so, contrary to what the equation might imply, one cannot obtain an arbitrarily large flux density in the air gap by decreasing the gap distance: a real core will eventually saturate. | https://en.wikipedia.org/wiki/Maximum_energy_product
In statistics and information theory , a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions . According to the principle of maximum entropy , if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time.
If X {\displaystyle X} is a continuous random variable with probability density p ( x ) {\displaystyle p(x)} , then the differential entropy of X {\displaystyle X} is defined as [ 1 ] [ 2 ] [ 3 ]
H ( X ) = − ∫ − ∞ ∞ p ( x ) log p ( x ) d x . {\displaystyle H(X)=-\int _{-\infty }^{\infty }p(x)\log p(x)\,dx~.}
If X {\displaystyle X} is a discrete random variable with distribution given by Pr ( X = x k ) = p k for k = 1 , 2 , … {\displaystyle \Pr(X{=}x_{k})=p_{k}\qquad {\text{ for }}\quad k=1,2,\ldots } then the entropy of X {\displaystyle X} is defined as H ( X ) = − ∑ k ≥ 1 p k log p k . {\displaystyle H(X)=-\sum _{k\geq 1}p_{k}\log p_{k}\,.}
The seemingly divergent term p ( x ) log p ( x ) {\displaystyle p(x)\log p(x)} is replaced by zero, whenever p ( x ) = 0 . {\displaystyle p(x)=0\,.}
This is a special case of more general forms described in the articles Entropy (information theory) , Principle of maximum entropy , and differential entropy. In connection with maximum entropy distributions, this is the only one needed, because maximizing H ( X ) {\displaystyle H(X)} will also maximize the more general forms.
The base of the logarithm is not important, as long as the same one is used consistently: Change of base merely results in a rescaling of the entropy. Information theorists may prefer to use base 2 in order to express the entropy in bits ; mathematicians and physicists often prefer the natural logarithm , resulting in a unit of "nat"s for the entropy.
However, the chosen measure d x {\displaystyle dx} is crucial, even though the typical use of the Lebesgue measure is often defended as a "natural" choice: Which measure is chosen determines the entropy and the consequent maximum entropy distribution.
Many statistical distributions of applicable interest are those for which the moments or other measurable quantities are constrained to be constants. The following theorem by Ludwig Boltzmann gives the form of the probability density under these constraints.
Suppose S {\displaystyle S} is a continuous, closed subset of the real numbers R {\displaystyle \mathbb {R} } and we choose to specify n {\displaystyle n} measurable functions f 1 , … , f n {\displaystyle f_{1},\ldots ,f_{n}} and n {\displaystyle n} numbers a 1 , … , a n . {\displaystyle a_{1},\ldots ,a_{n}.} We consider the class C {\displaystyle C} of all real-valued random variables which are supported on S {\displaystyle S} (i.e. whose density function is zero outside of S {\displaystyle S} ) and which satisfy the n {\displaystyle n} moment conditions:
E [ f j ( X ) ] ≥ a j for j = 1 , … , n {\displaystyle \operatorname {E} [f_{j}(X)]\geq a_{j}\qquad {\text{for }}\quad j=1,\ldots ,n}
If there is a member in C {\displaystyle C} whose density function is positive everywhere in S , {\displaystyle S,} and if there exists a maximal entropy distribution for C , {\displaystyle C,} then its probability density p ( x ) {\displaystyle p(x)} has the following form:
p ( x ) = exp ( ∑ j = 0 n λ j f j ( x ) ) for all x ∈ S {\displaystyle p(x)=\exp \left(\sum _{j=0}^{n}\lambda _{j}f_{j}(x)\right)\qquad {\text{ for all }}~x\in S}
where we assume that f 0 ( x ) = 1 . {\displaystyle f_{0}(x)=1\,.} The constant λ 0 {\displaystyle \lambda _{0}} and the n {\displaystyle n} Lagrange multipliers λ = ( λ 1 , … , λ n ) {\displaystyle {\boldsymbol {\lambda }}=(\lambda _{1},\ldots ,\lambda _{n})} solve the constrained optimization problem with a 0 = 1 {\displaystyle a_{0}=1} (which ensures that p {\displaystyle p} integrates to unity): [ 4 ]
max λ 0 ; λ { ∑ j = 0 n λ j a j − ∫ exp ( ∑ j = 0 n λ j f j ( x ) ) d x } subject to λ ≥ 0 {\displaystyle \max _{\lambda _{0};\,{\boldsymbol {\lambda }}}\left\{\sum _{j=0}^{n}\lambda _{j}a_{j}-\int \exp \left(\sum _{j=0}^{n}\lambda _{j}f_{j}(x)\right)dx\right\}\qquad ~{\text{ subject to }}~{\boldsymbol {\lambda }}\geq \mathbf {0} }
Using the Karush–Kuhn–Tucker conditions , it can be shown that the optimization problem has a unique solution because the objective function in the optimization is concave in λ . {\displaystyle {\boldsymbol {\lambda }}\,.}
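As a sketch of the optimization in practice, the primal problem can be solved directly for a small discrete support (the support and the moment constraint below are illustrative, and SciPy is assumed to be available); the numerical maximizer can then be checked against the exponential form predicted by the theorem.

import numpy as np
from scipy.optimize import minimize

xs = np.arange(10)            # support {0, 1, ..., 9}
target_mean = 2.0             # illustrative moment constraint E[X] = 2

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},         # normalization
    {"type": "eq", "fun": lambda p: p @ xs - target_mean},  # mean constraint
]
p0 = np.full(len(xs), 1.0 / len(xs))
res = minimize(neg_entropy, p0, bounds=[(0, 1)] * len(xs), constraints=constraints)
p = res.x
print(np.round(p, 4))
# log p should be (numerically) affine in x, i.e. p has the form exp(lambda0 + lambda1 * x):
print(np.round(np.diff(np.log(p)), 4))   # approximately constant increments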
Note that when the moment constraints are equalities (instead of inequalities), that is,
E [ f j ( X ) ] = a j for j = 1 , … , n , {\displaystyle \operatorname {E} [f_{j}(X)]=a_{j}\qquad {\text{ for }}~j=1,\ldots ,n\,,}
then the constraint condition λ ≥ 0 {\displaystyle {\boldsymbol {\lambda }}\geq \mathbf {0} } can be dropped, which makes optimization over the Lagrange multipliers unconstrained.
Suppose S = { x 1 , x 2 , … } {\displaystyle S=\{x_{1},x_{2},\ldots \}} is a (finite or infinite) discrete subset of the reals, and that we choose to specify n {\displaystyle n} functions f 1 , … , f n {\displaystyle f_{1},\ldots ,f_{n}} and n {\displaystyle n} numbers a 1 , … , a n . {\displaystyle a_{1},\ldots ,a_{n}\,.} We consider the class C {\displaystyle C} of all discrete random variables X {\displaystyle X} which are supported on S {\displaystyle S} and which satisfy the n {\displaystyle n} moment conditions
E [ f j ( X ) ] ≥ a j for j = 1 , … , n {\displaystyle \operatorname {E} [f_{j}(X)]\geq a_{j}\qquad ~{\text{ for }}~j=1,\ldots ,n}
If there exists a member of class C {\displaystyle C} which assigns positive probability to all members of S {\displaystyle S} and if there exists a maximum entropy distribution for C , {\displaystyle C,} then this distribution has the following shape:
Pr ( X = x k ) = exp ( ∑ j = 0 n λ j f j ( x k ) ) for k = 1 , 2 , … {\displaystyle \Pr(X{=}x_{k})=\exp \left(\sum _{j=0}^{n}\lambda _{j}f_{j}(x_{k})\right)\qquad {\text{ for }}~k=1,2,\ldots }
where we assume that f 0 = 1 {\displaystyle f_{0}=1} and the constants λ 0 , λ ≡ ( λ 1 , … , λ n ) {\displaystyle \lambda _{0},\,{\boldsymbol {\lambda }}\equiv (\lambda _{1},\ldots ,\lambda _{n})} solve the constrained optimization problem with a 0 = 1 {\displaystyle a_{0}=1} : [ 5 ]
max λ 0 ; λ { ∑ j = 0 n λ j a j − ∑ k ≥ 1 exp ( ∑ j = 0 n λ j f j ( x k ) ) } for which λ ≥ 0 {\displaystyle \max _{\lambda _{0};\,{\boldsymbol {\lambda }}}\left\{\sum _{j=0}^{n}\lambda _{j}a_{j}-\sum _{k\geq 1}\exp \left(\sum _{j=0}^{n}\lambda _{j}f_{j}(x_{k})\right)\right\}\qquad {\text{ for which }}~{\boldsymbol {\lambda }}\geq \mathbf {0} }
Again as above, if the moment conditions are equalities (instead of inequalities), then the constraint condition λ ≥ 0 {\displaystyle {\boldsymbol {\lambda }}\geq \mathbf {0} } is not present in the optimization.
In the case of equality constraints, this theorem is proved with the calculus of variations and Lagrange multipliers . The constraints can be written as
∫ − ∞ ∞ f j ( x ) p ( x ) d x = a j {\displaystyle \int _{-\infty }^{\infty }f_{j}(x)p(x)\,dx=a_{j}}
We consider the functional
J ( p ) = ∫ − ∞ ∞ p ( x ) ln p ( x ) d x − η 0 ( ∫ − ∞ ∞ p ( x ) d x − 1 ) − ∑ j = 1 n λ j ( ∫ − ∞ ∞ f j ( x ) p ( x ) d x − a j ) {\displaystyle J(p)=\int _{-\infty }^{\infty }p(x)\ln {p(x)}\,dx-\eta _{0}\left(\int _{-\infty }^{\infty }p(x)\,dx-1\right)-\sum _{j=1}^{n}\lambda _{j}\left(\int _{-\infty }^{\infty }f_{j}(x)p(x)\,dx-a_{j}\right)}
where η 0 {\displaystyle \eta _{0}} and λ j , j ≥ 1 {\displaystyle \lambda _{j},j\geq 1} are the Lagrange multipliers. The zeroth constraint ensures the second axiom of probability . The other constraints are that the measurements of the function are given constants up to order n {\displaystyle n} . The entropy attains an extremum when the functional derivative is equal to zero:
δ J ( p ) δ p = ln p ( x ) + 1 − η 0 − ∑ j = 1 n λ j f j ( x ) = 0 {\displaystyle {\frac {\delta J(p)}{\delta p}}=\ln {p(x)}+1-\eta _{0}-\sum _{j=1}^{n}\lambda _{j}f_{j}(x)=0}
Therefore, the extremal entropy probability distribution in this case must be of the form ( λ 0 := η 0 − 1 {\displaystyle \lambda _{0}:=\eta _{0}-1} ),
p ( x ) = e − 1 + η 0 e ∑ j = 1 n λ j f j ( x ) = exp ( ∑ j = 0 n λ j f j ( x ) ) , {\displaystyle p(x)=e^{-1+\eta _{0}}\,e^{\sum _{j=1}^{n}\lambda _{j}f_{j}(x)}=\exp \left(\sum _{j=0}^{n}\lambda _{j}f_{j}(x)\right),}
remembering that f 0 ( x ) = 1 {\displaystyle f_{0}(x)=1} . It can be verified that this is the maximal solution by checking that the variation around this solution is always negative.
Suppose p {\displaystyle p} and p ′ {\displaystyle p'} are distributions satisfying the expectation-constraints. Letting α ∈ ( 0 , 1 ) {\displaystyle \alpha \in (0,1)} and considering the distribution q = α p + ( 1 − α ) p ′ {\displaystyle q=\alpha \,p+(1-\alpha )\,p'} it is clear that this distribution satisfies the expectation-constraints and furthermore has as support supp ( q ) = supp ( p ) ∪ supp ( p ′ ) . {\displaystyle \operatorname {supp} (q)=\operatorname {supp} (p)\cup \operatorname {supp} (p')\,.} From basic facts about entropy, it holds that H ( q ) ≥ α H ( p ) + ( 1 − α ) H ( p ′ ) . {\displaystyle {\mathcal {H}}(q)\geq \alpha \,{\mathcal {H}}(p)+(1-\alpha )\,{\mathcal {H}}(p').} Taking limits α → 1 {\displaystyle \alpha \to 1} and α → 0 , {\displaystyle \alpha \to 0\,,} respectively, yields H ( q ) ≥ H ( p ) , H ( p ′ ) . {\displaystyle {\mathcal {H}}(q)\geq {\mathcal {H}}(p),{\mathcal {H}}(p')\,.}
It follows that a distribution satisfying the expectation-constraints and maximising entropy must necessarily have full support — i.e. the distribution is almost everywhere strictly positive. It follows that the maximising distribution must be an internal point in the space of distributions satisfying the expectation-constraints, that is, it must be a local extreme. Thus it suffices to show that the local extreme is unique, in order to show that the entropy-maximising distribution is unique (this also shows that the local extreme is the global maximum).
Suppose p {\displaystyle p} and p ′ {\displaystyle p'} are local extremes. Reformulating the above computations these are characterised by parameters λ , λ ′ ∈ R n {\displaystyle {\boldsymbol {\lambda }},\,{\boldsymbol {\lambda }}'\in \mathbb {R} ^{n}} via p ( x ) = exp ⟨ λ , f ( x ) ⟩ / C ( λ ) {\displaystyle p(x)={\exp \left\langle {\boldsymbol {\lambda }},\mathbf {f} (x)\right\rangle }/{C({\boldsymbol {\lambda }})}} and similarly for p ′ , {\displaystyle p',} where C ( λ ) = ∫ R exp ⟨ λ , f ( x ) ⟩ d x . {\textstyle C({\boldsymbol {\lambda }})=\int _{\mathbb {R} }\exp \left\langle {\boldsymbol {\lambda }},\mathbf {f} (x)\right\rangle \,dx\,.} We now note a series of identities: Via the satisfaction of the expectation-constraints and utilising gradients / directional derivatives, one has
D log C ( ⋅ ) | λ = D C ( ⋅ ) C ( ⋅ ) | λ = E p [ f ( X ) ] = a {\displaystyle {\left.D\log C(\cdot )\right|}_{\boldsymbol {\lambda }}={\left.{\tfrac {DC(\cdot )}{C(\cdot )}}\right|}_{\boldsymbol {\lambda }}=\operatorname {E} _{p}\left[\mathbf {f} (X)\right]=\mathbf {a} } and similarly for λ ′ . {\displaystyle {\boldsymbol {\lambda }}'~.} Letting u = λ ′ − λ ∈ R n {\displaystyle u={\boldsymbol {\lambda }}'-{\boldsymbol {\lambda }}\in \mathbb {R} ^{n}} one obtains:
0 = ⟨ u , a − a ⟩ = D u log C ( ⋅ ) | λ ′ − D u log C ( ⋅ ) | λ = D u 2 log C ( ⋅ ) | γ {\displaystyle 0=\left\langle u,\mathbf {a} -\mathbf {a} \right\rangle ={\left.D_{u}\log C(\cdot )\right|}_{{\boldsymbol {\lambda }}'}-{\left.D_{u}\log C(\cdot )\right|}_{\boldsymbol {\lambda }}={\left.D_{u}^{2}\log C(\cdot )\right|}_{\boldsymbol {\gamma }}}
where γ = θ λ + ( 1 − θ ) λ ′ {\displaystyle {\boldsymbol {\gamma }}=\theta {\boldsymbol {\lambda }}+(1-\theta ){\boldsymbol {\lambda }}'} for some θ ∈ ( 0 , 1 ) . {\displaystyle \theta \in (0,1).} Computing further, one has
0 = D u 2 log C ( ⋅ ) | γ = D u ( D u C ( ⋅ ) C ( ⋅ ) ) | γ = D u 2 C ( ⋅ ) C ( ⋅ ) | γ − ( D u C ( ⋅ ) ) 2 C ( ⋅ ) 2 | γ = E q [ ⟨ u , f ( X ) ⟩ 2 ] − ( E q [ ⟨ u , f ( X ) ⟩ ] ) 2 = Var q [ ⟨ u , f ( X ) ⟩ ] {\displaystyle {\begin{aligned}0&={\left.D_{u}^{2}\log C(\cdot )\right|}_{\boldsymbol {\gamma }}\\[1ex]&={\left.D_{u}\left({\frac {D_{u}C(\cdot )}{C(\cdot )}}\right)\right|}_{\boldsymbol {\gamma }}={\left.{\frac {D_{u}^{2}C(\cdot )}{C(\cdot )}}\right|}_{\boldsymbol {\gamma }}-{\left.{\frac {{\left(D_{u}C(\cdot )\right)}^{2}}{C(\cdot )^{2}}}\right|}_{\boldsymbol {\gamma }}\\[1ex]&=\operatorname {E} _{q}\left[{\left\langle u,\mathbf {f} (X)\right\rangle }^{2}\right]-{\left(\operatorname {E} _{q}\left[\left\langle u,\mathbf {f} (X)\right\rangle \right]\right)}^{2}\\[2ex]&=\operatorname {Var} _{q}\left[\left\langle u,\mathbf {f} (X)\right\rangle \right]\end{aligned}}}
where q {\displaystyle q} is similar to the distribution above, only parameterised by γ . {\displaystyle {\boldsymbol {\gamma }}~.} Assuming that no non-trivial linear combination of the observables is almost everywhere (a.e.) constant (which, e.g., holds if the observables are independent and not a.e. constant), it holds that ⟨ u , f ( X ) ⟩ {\displaystyle \langle u,\mathbf {f} (X)\rangle } has non-zero variance, unless u = 0 . {\displaystyle u=0~.} By the above equation it is thus clear that the latter must be the case. Hence λ ′ − λ = u = 0 , {\displaystyle {\boldsymbol {\lambda }}'-{\boldsymbol {\lambda }}=u=0\,,} so the parameters characterising the local extrema p , p ′ {\displaystyle p,\,p'} are identical, which means that the distributions themselves are identical. Thus, the local extreme is unique and, by the above discussion, the maximum is unique – provided a local extreme actually exists.
Note that not all classes of distributions contain a maximum entropy distribution. It is possible that a class contain distributions of arbitrarily large entropy (e.g. the class of all continuous distributions on R with mean 0 but arbitrary standard deviation), or that the entropies are bounded above but there is no distribution which attains the maximal entropy. [ a ] It is also possible that the expected value restrictions for the class C force the probability distribution to be zero in certain subsets of S . In that case our theorem doesn't apply, but one can work around this by shrinking the set S .
Every probability distribution is trivially a maximum entropy probability distribution under the constraint that the distribution has its own entropy. To see this, rewrite the density as p ( x ) = exp ( ln p ( x ) ) {\displaystyle p(x)=\exp {(\ln {p(x)})}} and compare to the expression of the theorem above. By choosing ln p ( x ) → f ( x ) {\displaystyle \ln {p(x)}\rightarrow f(x)} to be the measurable function and
∫ exp ( f ( x ) ) f ( x ) d x = − H {\displaystyle \int \exp {(f(x))}f(x)dx=-H}
to be the constant, p ( x ) {\displaystyle p(x)} is the maximum entropy probability distribution under the constraint
∫ p ( x ) f ( x ) d x = − H . {\displaystyle \int p(x)f(x)\,dx=-H.}
Nontrivial examples are distributions that are subject to multiple constraints that are different from the assignment of the entropy. These are often found by starting with the same procedure ln p ( x ) → f ( x ) {\displaystyle \ln {p(x)}\to f(x)} and finding that f ( x ) {\displaystyle f(x)} can be separated into parts.
A table of examples of maximum entropy distributions is given in Lisman (1972) [ 6 ] and Park & Bera (2009). [ 7 ]
The uniform distribution on the interval [ a , b ] is the maximum entropy distribution among all continuous distributions which are supported in the interval [ a , b ], and thus the probability density is 0 outside of the interval. This uniform density can be related to Laplace's principle of indifference , sometimes called the principle of insufficient reason. More generally, if we are given a subdivision a = a 0 < a 1 < ... < a k = b of the interval [ a , b ] and probabilities p 1 ,..., p k that add up to one, then we can consider the class of all continuous distributions such that Pr ( a j − 1 ≤ X < a j ) = p j for j = 1 , … , k {\displaystyle \Pr(a_{j-1}\leq X<a_{j})=p_{j}\quad {\text{ for }}j=1,\ldots ,k} The density of the maximum entropy distribution for this class is constant on each of the intervals [ a j −1 , a j ). The uniform distribution on the finite set { x 1 ,..., x n } (which assigns a probability of 1/ n to each of these values) is the maximum entropy distribution among all discrete distributions supported on this set.
The exponential distribution , for which the density function is
p ( x | λ ) = { λ e − λ x x ≥ 0 , 0 x < 0 , {\displaystyle p(x|\lambda )={\begin{cases}\lambda e^{-\lambda x}&x\geq 0,\\0&x<0,\end{cases}}}
is the maximum entropy distribution among all continuous distributions supported in [0,∞) that have a specified mean of 1/λ.
In the case of distributions supported on [0,∞), the maximum entropy distribution depends on relationships between the first and second moments. In specific cases, it may be the exponential distribution, or may be another distribution, or may be undefinable. [ 8 ]
The normal distribution N ( μ , σ 2 ), for which the density function is
p ( x | μ , σ ) = 1 σ 2 π e − ( x − μ ) 2 2 σ 2 , {\displaystyle p(x|\mu ,\sigma )={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}},}
has maximum entropy among all real -valued distributions supported on (−∞,∞) with a specified variance σ 2 (a particular moment ). The same is true when the mean μ and the variance σ 2 are specified (the first two moments), since entropy is translation invariant on (−∞,∞). Therefore, the assumption of normality imposes the minimal prior structural constraint beyond these moments. (See the differential entropy article for a derivation.)
Among all the discrete distributions supported on the set { x 1 ,..., x n } with a specified mean μ, the maximum entropy distribution has the following shape: Pr ( X = x k ) = C r x k for k = 1 , … , n {\displaystyle \Pr(X{=}x_{k})=Cr^{x_{k}}\quad {\text{ for }}k=1,\ldots ,n} where the positive constants C and r can be determined by the requirements that the sum of all the probabilities must be 1 and the expected value must be μ.
For example, suppose a large number N of dice are thrown and you are told that the sum of all the shown numbers is S . Based on this information alone, what would be a reasonable assumption for the number of dice showing 1, 2, ..., 6? This is an instance of the situation considered above, with { x 1 ,..., x 6 } = {1,...,6} and μ = S / N .
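A sketch of solving the dice case numerically (SciPy assumed available, with μ = S/N taken as 4.5 purely for illustration): the normalization fixes C, and r is found from the mean condition.

import numpy as np
from scipy.optimize import brentq

xs = np.arange(1, 7)   # faces 1..6
mu = 4.5               # illustrative observed mean S / N

def mean_given_r(r):
    w = r ** xs
    p = w / w.sum()    # C is determined by normalization
    return p @ xs

r = brentq(lambda r: mean_given_r(r) - mu, 1e-6, 1e6)
p = r ** xs / (r ** xs).sum()
print(round(r, 4), np.round(p, 4))   # probabilities increase geometrically towards the face 6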
Finally, among all the discrete distributions supported on the infinite set { x 1 , x 2 , . . . } {\displaystyle \{x_{1},x_{2},...\}} with mean μ , the maximum entropy distribution has the shape: Pr ( X = x k ) = C r x k for k = 1 , 2 , … , {\displaystyle \Pr(X{=}x_{k})=Cr^{x_{k}}\quad {\text{ for }}k=1,2,\ldots ,} where again the constants C and r were determined by the requirements that the sum of all the probabilities must be 1 and the expected value must be μ. For example, in the case that x k = k , this gives C = 1 μ − 1 , r = μ − 1 μ , {\displaystyle C={\frac {1}{\mu -1}},\quad \quad r={\frac {\mu -1}{\mu }},}
such that respective maximum entropy distribution is the geometric distribution .
For a continuous random variable θ i {\displaystyle \theta _{i}} distributed about the unit circle, the Von Mises distribution maximizes the entropy when the real and imaginary parts of the first circular moment are specified [ 9 ] or, equivalently, the circular mean and circular variance are specified.
When the mean and variance of the angles θ i {\displaystyle \theta _{i}} modulo 2 π {\displaystyle 2\pi } are specified, the wrapped normal distribution maximizes the entropy. [ 9 ]
There exists an upper bound on the entropy of continuous random variables on R {\displaystyle \mathbb {R} } with a specified mean, variance, and skew. However, there is no distribution which achieves this upper bound , because p ( x ) = c exp ( λ 1 x + λ 2 x 2 + λ 3 x 3 ) {\displaystyle p(x)=c\exp {(\lambda _{1}x+\lambda _{2}x^{2}+\lambda _{3}x^{3})}} is unbounded when λ 3 ≠ 0 {\displaystyle \lambda _{3}\neq 0} (see Cover & Thomas (2006: chapter 12)).
However, the maximum entropy is ε -achievable: a distribution's entropy can be arbitrarily close to the upper bound. Start with a normal distribution of the specified mean and variance. To introduce a positive skew, perturb the normal distribution upward by a small amount at a value many σ larger than the mean. The skewness, being proportional to the third moment, will be affected more than the lower order moments.
This is a special case of the more general fact that the exponential of any odd-order polynomial in x will be unbounded on R {\displaystyle \mathbb {R} } . For example, c e λ x {\displaystyle ce^{\lambda x}} will likewise be unbounded on R {\displaystyle \mathbb {R} } , but when the support is limited to a bounded or semi-bounded interval the upper entropy bound may be achieved (e.g. if x lies in the interval [0,∞) and λ < 0 , the exponential distribution will result).
Every distribution with log-concave density is a maximal entropy distribution with specified mean μ and deviation risk measure D . [ 10 ]
In particular, the maximal entropy distribution with specified mean E ( X ) ≡ μ {\displaystyle E(X)\equiv \mu } and deviation D ( X ) ≡ d {\displaystyle D(X)\equiv d} is:
In the table below, each listed distribution maximizes the entropy for a particular set of functional constraints listed in the third column, and the constraint that x {\displaystyle x} be included in the support of the probability density, which is listed in the fourth column. [ 6 ] [ 7 ]
Several listed examples ( Bernoulli , geometric , exponential , Laplace , Pareto ) are trivially true, because their associated constraints are equivalent to the assignment of their entropy. They are included anyway because their constraint is related to a common or easily measured quantity.
For reference, Γ ( x ) = ∫ 0 ∞ e − t t x − 1 d t {\displaystyle \Gamma (x)=\int _{0}^{\infty }e^{-t}t^{x-1}\,dt} is the gamma function , ψ ( x ) = d d x ln Γ ( x ) = Γ ′ ( x ) Γ ( x ) {\displaystyle \psi (x)={\frac {d}{dx}}\ln \Gamma (x)={\frac {\Gamma '(x)}{\Gamma (x)}}} is the digamma function , B ( p , q ) = Γ ( p ) Γ ( q ) Γ ( p + q ) {\displaystyle B(p,q)={\frac {\Gamma (p)\,\Gamma (q)}{\Gamma (p+q)}}} is the beta function , and γ E {\displaystyle \gamma _{\mathsf {E}}} is the Euler-Mascheroni constant .
The maximum entropy principle can be used to upper bound the entropy of statistical mixtures. [ 12 ] | https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution |
In physics , maximum entropy thermodynamics (colloquially, MaxEnt thermodynamics ) views equilibrium thermodynamics and statistical mechanics as inference processes. More specifically, MaxEnt applies inference techniques rooted in Shannon information theory , Bayesian probability , and the principle of maximum entropy . These techniques are relevant to any situation requiring prediction from incomplete or insufficient data (e.g., image reconstruction , signal processing , spectral analysis , and inverse problems ). MaxEnt thermodynamics began with two papers by Edwin T. Jaynes published in the 1957 Physical Review . [ 1 ] [ 2 ]
Central to the MaxEnt thesis is the principle of maximum entropy . It takes as given a partly specified model and some specified data related to the model. It selects a preferred probability distribution to represent the model. The given data state "testable information" [ 3 ] [ 4 ] about the probability distribution , for example particular expectation values, but are not in themselves sufficient to uniquely determine it. The principle states that one should prefer the distribution which maximizes the Shannon information entropy ,
{\displaystyle S_{\text{I}}=-\sum _{i}p_{i}\ln p_{i}\,.}
This is known as the Gibbs algorithm , having been introduced by J. Willard Gibbs in 1878, to set up statistical ensembles to predict the properties of thermodynamic systems at equilibrium. It is the cornerstone of the statistical mechanical analysis of the thermodynamic properties of equilibrium systems (see partition function ).
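A toy sketch of the Gibbs algorithm for a system with a handful of energy levels (illustrative energies and mean-energy constraint, SciPy assumed available); maximizing the entropy subject to a fixed expected energy yields weights proportional to exp(−βE), the canonical distribution.

import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative energy levels (arbitrary units)
U = 1.2                              # constrained expectation value of the energy

def mean_energy(beta):
    w = np.exp(-beta * E)            # Boltzmann weights for a trial multiplier beta
    p = w / w.sum()
    return p @ E

# Solve for the Lagrange multiplier beta that reproduces the constraint
beta = brentq(lambda b: mean_energy(b) - U, -50.0, 50.0)
p = np.exp(-beta * E) / np.exp(-beta * E).sum()
print(round(beta, 4), np.round(p, 4))   # the maximum-entropy (canonical) distribution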
A direct connection is thus made between the equilibrium thermodynamic entropy S Th , a state function of pressure, volume, temperature, etc., and the information entropy for the predicted distribution with maximum uncertainty conditioned only on the expectation values of those variables:
{\displaystyle S_{\text{Th}}=k_{\text{B}}\,S_{\text{I}}\,.}
k B , the Boltzmann constant , has no fundamental physical significance here, but is necessary to retain consistency with the previous historical definition of entropy by Clausius (1865) (see Boltzmann constant ).
However, the MaxEnt school argue that the MaxEnt approach is a general technique of statistical inference, with applications far beyond this. It can therefore also be used to predict a distribution for "trajectories" Γ "over a period of time" by maximising:
{\displaystyle S_{\text{I}}=-\sum _{\Gamma }p_{\Gamma }\ln p_{\Gamma }\,.}
This "information entropy" does not necessarily have a simple correspondence with thermodynamic entropy. But it can be used to predict features of nonequilibrium thermodynamic systems as they evolve over time.
For non-equilibrium scenarios, in an approximation that assumes local thermodynamic equilibrium , with the maximum entropy approach, the Onsager reciprocal relations and the Green–Kubo relations fall out directly. The approach also creates a theoretical framework for the study of some very special cases of far-from-equilibrium scenarios, making the derivation of the entropy production fluctuation theorem straightforward. For non-equilibrium processes, as is so for macroscopic descriptions, a general definition of entropy for microscopic statistical mechanical accounts is also lacking.
Technical note : For the reasons discussed in the article differential entropy , the simple definition of Shannon entropy ceases to be directly applicable for random variables with continuous probability distribution functions . Instead the appropriate quantity to maximize is the "relative information entropy",
{\displaystyle H_{c}=-\int p(x)\log {\frac {p(x)}{m(x)}}\,dx\,.}
H_c is the negative of the Kullback–Leibler divergence , or discrimination information, of m ( x ) from p ( x ), where m ( x ) is a prior invariant measure for the variable(s). The relative entropy H_c is never positive, and can be thought of as (the negative of) the number of bits of uncertainty lost by fixing on p ( x ) rather than m ( x ). Unlike the Shannon entropy, the relative entropy H_c has the advantage of remaining finite and well-defined for continuous x , and invariant under 1-to-1 coordinate transformations. The two expressions coincide for discrete probability distributions , if one can make the assumption that m ( x i ) is uniform – i.e. the principle of equal a-priori probability , which underlies statistical thermodynamics.
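A small discrete sketch of this quantity (illustrative values only): H_c is the negative Kullback–Leibler divergence, so it is never positive, and with a uniform prior measure it differs from the Shannon entropy only by a constant.

```python
# Small sketch (illustrative): the relative information entropy H_c is the negative
# Kullback-Leibler divergence of p from the prior measure m; it is never positive, and
# with a uniform m it reduces to the Shannon entropy minus log(n).
import numpy as np

def relative_entropy(p, m):
    p, m = np.asarray(p, float), np.asarray(m, float)
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask] / m[mask]))   # = -KL(p || m) <= 0

p = np.array([0.7, 0.2, 0.1])
m_uniform = np.ones(3) / 3
print(relative_entropy(p, m_uniform))     # H(p) - log(3)
print(-(p * np.log(p)).sum() - np.log(3)) # same value
```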
Adherents to the MaxEnt viewpoint take a clear position on some of the conceptual/philosophical questions in thermodynamics. This position is sketched below.
Jaynes (1985, [ 5 ] 2003, [ 6 ] et passim ) discussed the concept of probability. According to the MaxEnt viewpoint, the probabilities in statistical mechanics are determined jointly by two factors: by respectively specified particular models for the underlying state space (e.g. Liouvillian phase space ); and by respectively specified particular partial descriptions of the system (the macroscopic description of the system used to constrain the MaxEnt probability assignment). The probabilities are objective in the sense that, given these inputs, a uniquely defined probability distribution will result, the same for every rational investigator, independent of the subjectivity or arbitrary opinion of particular persons. The probabilities are epistemic in the sense that they are defined in terms of specified data and derived from those data by definite and objective rules of inference, the same for every rational investigator. [ 7 ] Here the word epistemic, which refers to objective and impersonal scientific knowledge, the same for every rational investigator, is used in the sense that contrasts it with opiniative, which refers to the subjective or arbitrary beliefs of particular persons; this contrast was used by Plato and Aristotle , and remains reliable today.
Jaynes also used the word 'subjective' in this context because others have used it in this context. He accepted that in a sense, a state of knowledge has a subjective aspect, simply because it refers to thought, which is a mental process. But he emphasized that the principle of maximum entropy refers only to thought which is rational and objective, independent of the personality of the thinker. In general, from a philosophical viewpoint, the words 'subjective' and 'objective' are not contradictory; often an entity has both subjective and objective aspects. Jaynes explicitly rejected the criticism of some writers that, just because one can say that thought has a subjective aspect, thought is automatically non-objective. He explicitly rejected subjectivity as a basis for scientific reasoning, the epistemology of science; he required that scientific reasoning have a fully and strictly objective basis. [ 8 ] Nevertheless, critics continue to attack Jaynes, alleging that his ideas are "subjective". One writer even goes so far as to label Jaynes' approach as "ultrasubjectivist", [ 9 ] and to mention "the panic that the term subjectivism created amongst physicists". [ 10 ]
The probabilities represent both the degree of knowledge and lack of information in the data and the model used in the analyst's macroscopic description of the system, and also what those data say about the nature of the underlying reality.
The fitness of the probabilities depends on whether the constraints of the specified macroscopic model are a sufficiently accurate and/or complete description of the system to capture all of the experimentally reproducible behavior. This cannot be guaranteed, a priori . For this reason MaxEnt proponents also call the method predictive statistical mechanics . The predictions can fail. But if they do, this is informative, because it signals the presence of new constraints needed to capture reproducible behavior in the system, which had not been taken into account.
The thermodynamic entropy (at equilibrium) is a function of the state variables of the model description. It is therefore as "real" as the other variables in the model description. If the model constraints in the probability assignment are a "good" description, containing all the information needed to predict reproducible experimental results, then that includes all of the results one could predict using the formulae involving entropy from classical thermodynamics. To that extent, the MaxEnt S Th is as "real" as the entropy in classical thermodynamics.
Of course, in reality there is only one real state of the system. [ citation needed ] The entropy is not a direct function of that state. It is a function of the real state only through the (subjectively chosen) macroscopic model description.
The Gibbsian ensemble idealizes the notion of repeating an experiment again and again on different systems, not again and again on the same system. So long-term time averages and the ergodic hypothesis , despite the intense interest in them in the first part of the twentieth century, strictly speaking are not relevant to the probability assignment for the state one might find the system in.
However, this changes if there is additional knowledge that the system is being prepared in a particular way some time before the measurement. One must then consider whether this gives further information which is still relevant at the time of measurement. The question of how 'rapidly mixing' different properties of the system are then becomes very much of interest. Information about some degrees of freedom of the combined system may become unusable very quickly; information about other properties of the system may go on being relevant for a considerable time.
If nothing else, the medium and long-run time correlation properties of the system are interesting subjects for experimentation in themselves. Failure to accurately predict them is a good indicator that relevant macroscopically determinable physics may be missing from the model.
According to Liouville's theorem for Hamiltonian dynamics , the hyper-volume of a cloud of points in phase space remains constant as the system evolves. Therefore, the information entropy must also remain constant, if we condition on the original information, and then follow each of those microstates forward in time: $S_I^{(2)} = S_I^{(1)}$.
However, as time evolves, that initial information we had becomes less directly accessible. Instead of being easily summarizable in the macroscopic description of the system, it increasingly relates to very subtle correlations between the positions and momenta of individual molecules. (Compare to Boltzmann's H-theorem .) Equivalently, it means that the probability distribution for the whole system, in 6N-dimensional phase space, becomes increasingly irregular, spreading out into long thin fingers rather than the initial tightly defined volume of possibilities.
Classical thermodynamics is built on the assumption that entropy is a state function of the macroscopic variables —i.e., that none of the history of the system matters, so that it can all be ignored.
The extended, wispy, evolved probability distribution, which still has the initial Shannon entropy S Th (1) , should reproduce the expectation values of the observed macroscopic variables at time t 2 . However it will no longer necessarily be a maximum entropy distribution for that new macroscopic description. On the other hand, the new thermodynamic entropy S Th (2) is, by construction, the entropy of the maximum entropy distribution for that new description. Therefore, we expect: $S_{Th}^{(2)} \geq S_{Th}^{(1)}$.
At an abstract level, this result implies that some of the information we originally had about the system has become "no longer useful" at a macroscopic level. At the level of the 6 N -dimensional probability distribution, this result represents coarse graining —i.e., information loss by smoothing out very fine-scale detail.
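A toy discrete sketch of this coarse-graining argument (every choice below is an illustrative assumption): a permutation of microstate cells stands in for volume-preserving Liouville evolution, leaving the fine-grained Shannon entropy unchanged, while smoothing the distribution over macro bins can only raise it.

```python
# Toy sketch (illustrative assumptions throughout): a permutation of microstate cells is a
# discrete stand-in for Liouville (volume-preserving) evolution, so the fine-grained
# Shannon entropy is unchanged; coarse graining then spreads each macro-bin's probability
# uniformly over its microcells, which cannot decrease the entropy.
import numpy as np

rng = np.random.default_rng(0)

def shannon(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

p = np.zeros(64); p[:4] = 0.25            # tightly concentrated initial distribution
p_t = p[rng.permutation(64)]              # "dynamics": a measure-preserving shuffle

# Coarse graining: spread each macro-bin's probability uniformly over its 4 microcells.
p_coarse = np.repeat(p_t.reshape(16, 4).mean(axis=1), 4)

print(shannon(p), shannon(p_t), shannon(p_coarse))   # last value is >= the first two
```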
Some caveats should be considered with the above.
1. Like all statistical mechanical results according to the MaxEnt school, this increase in thermodynamic entropy is only a prediction . It assumes in particular that the initial macroscopic description contains all of the information relevant to predicting the later macroscopic state. This may not be the case, for example if the initial description fails to reflect some aspect of the preparation of the system which later becomes relevant. In that case the "failure" of a MaxEnt prediction tells us that there is something more which is relevant that we may have overlooked in the physics of the system.
It is also sometimes suggested that quantum measurement , especially in the decoherence interpretation, may give an apparently unexpected reduction in entropy per this argument, as it appears to involve macroscopic information becoming available which was previously inaccessible. (However, the entropy accounting of quantum measurement is tricky, because to get full decoherence one may be assuming an infinite environment, with an infinite entropy).
2. The argument so far has glossed over the question of fluctuations . It has also implicitly assumed that the uncertainty predicted at time t 1 for the variables at time t 2 will be much smaller than the measurement error. But if the measurements do meaningfully update our knowledge of the system, our uncertainty as to its state is reduced, giving a new S I (2) which is less than S I (1) . (Note that if we allow ourselves the abilities of Laplace's demon , the consequences of this new information can also be mapped backwards, so our uncertainty about the dynamical state at time t 1 is now also reduced from S I (1) to S I (2) ).
We know that S Th (2) > S I (2) ; but we can now no longer be certain that it is greater than S Th (1) = S I (1) . This then leaves open the possibility for fluctuations in S Th . The thermodynamic entropy may go "down" as well as up. A more sophisticated analysis is given by the entropy Fluctuation Theorem , which can be established as a consequence of the time-dependent MaxEnt picture.
3. As just indicated, the MaxEnt inference runs equally well in reverse. So given a particular final state, we can ask, what can we "retrodict" to improve our knowledge about earlier states? However the Second Law argument above also runs in reverse: given macroscopic information at time t 2 , we should expect it too to become less useful. The two procedures are time-symmetric. But now the information will become less and less useful at earlier and earlier times. (Compare with Loschmidt's paradox .) The MaxEnt inference would predict that the most probable origin of a currently low-entropy state would be as a spontaneous fluctuation from an earlier high entropy state. But this conflicts with what we know to have happened, namely that entropy has been increasing steadily, even back in the past.
The MaxEnt proponents' response to this would be that such a systematic failing in the prediction of a MaxEnt inference is a "good" thing. [ 11 ] It means that there is clear evidence that some important physical information has been missed in the specification of the problem. If it is correct that the dynamics "are" time-symmetric , it appears that we need to put in by hand a prior probability that initial configurations with a low thermodynamic entropy are more likely than initial configurations with a high thermodynamic entropy. This cannot be explained by the immediate dynamics. Quite possibly, it arises as a reflection of the evident time-asymmetric evolution of the universe on a cosmological scale (see arrow of time ).
Maximum entropy thermodynamics has attracted significant opposition, in part because of the relative paucity of published results from the MaxEnt school, especially with regard to new testable predictions far from equilibrium. [ 12 ]
The theory has also been criticized on the grounds of internal consistency. For instance, Radu Balescu provides a strong criticism of the MaxEnt school and of Jaynes' work. Balescu states that the theory of Jaynes and coworkers is based on a non-transitive evolution law that produces ambiguous results. Although some difficulties of the theory can be cured, the theory "lacks a solid foundation" and "has not led to any new concrete result". [ 13 ]
Though the maximum entropy approach is based directly on informational entropy, it is applicable to physics only when there is a clear physical definition of entropy. There is no clear, unique, general physical definition of entropy for non-equilibrium systems, which are general physical systems considered during a process rather than thermodynamic systems in their own internal states of thermodynamic equilibrium. [ 14 ] It follows that the maximum entropy approach will not be applicable to non-equilibrium systems until a clear physical definition of entropy is found. This problem is related to the fact that heat may be transferred from a hotter to a colder physical system even when local thermodynamic equilibrium does not hold, so that neither system has a well defined temperature. Classical entropy is defined for a system in its own internal state of thermodynamic equilibrium, which is defined by state variables with no non-zero fluxes, so that flux variables do not appear as state variables. But for a strongly non-equilibrium system, during a process, the state variables must include non-zero flux variables. Classical physical definitions of entropy do not cover this case, especially when the fluxes are large enough to destroy local thermodynamic equilibrium. In other words, for non-equilibrium systems in general, the definition of entropy will need at least to involve specification of the process, including non-zero fluxes, beyond the classical static thermodynamic state variables. The 'entropy' that is maximized needs to be defined suitably for the problem at hand; if an inappropriate 'entropy' is maximized, a wrong result is likely. In principle, maximum entropy thermodynamics does not refer narrowly and only to classical thermodynamic entropy. It is about informational entropy applied to physics, explicitly depending on the data used to formulate the problem at hand. According to Attard, for physical problems analyzed by strongly non-equilibrium thermodynamics, several physically distinct kinds of entropy need to be considered, including what he calls second entropy. Attard writes: "Maximizing the second entropy over the microstates in the given initial macrostate gives the most likely target macrostate." [ 15 ] The physically defined second entropy can also be considered from an informational viewpoint. | https://en.wikipedia.org/wiki/Maximum_entropy_thermodynamics
Maximum life span (or, for humans , maximum reported age at death ) is a measure of the maximum amount of time one or more members of a population have been observed to survive between birth and death . The term can also denote an estimate of the maximum amount of time that a member of a given species could survive between birth and death, provided circumstances that are optimal to that member's longevity .
Most living species have an upper limit on the number of times somatic cells not expressing telomerase can divide . This is called the Hayflick limit , although this number of cell divisions does not strictly control lifespan.
In animal studies, maximum life span is often taken to be the mean life span of the most long-lived 10% of a given cohort . By another definition, however, maximum life span corresponds to the age at which the oldest known member of a species or experimental group has died. Calculation of the maximum life span in the latter sense depends upon the initial sample size. [ 1 ]
Maximum life span contrasts with mean life span (average life span , life expectancy ) , and longevity . Mean life span varies with susceptibility to disease, accident , suicide and homicide , whereas maximum life span is determined by "rate of aging". [ 2 ] [ 3 ] [ failed verification ] Longevity refers only to the characteristics of the especially long lived members of a population, such as infirmities as they age or compression of morbidity , and not the specific life span of an individual. [ citation needed ]
The longest living person whose dates of birth and death were verified according to the modern norms of Guinness World Records and the Gerontology Research Group was Jeanne Calment (1875–1997), a Frenchwoman who is verified to have lived to 122. The oldest verified male lifespan is 116, that of the Japanese man Jiroemon Kimura . Reduction of infant mortality has accounted for most of the increase in average life span , but since the 1960s mortality rates among those over 80 years have decreased by about 1.5% per year. According to James Vaupel , "The progress being made in lengthening lifespans and postponing senescence is entirely due to medical and public-health efforts, rising standards of living, better education, healthier nutrition and more salubrious lifestyles." [ 4 ] Animal studies suggest that further lengthening of median human lifespan as well as maximum lifespan could be achieved through " calorie restriction mimetic " drugs or by directly reducing food consumption. [ 5 ] Although calorie restriction has not been proven to extend the maximum human life span as of 2014, results in ongoing primate studies have demonstrated that the assumptions derived from rodents are valid in primates. [ 6 ] [ 7 ]
It has been proposed that no fixed theoretical limit to human longevity is apparent today. [ 8 ] [ 9 ] Studies in the biodemography of human longevity indicate a late-life mortality deceleration law : that death rates level off at advanced ages to a late-life mortality plateau. That is, there is no fixed upper limit to human longevity, or fixed maximal human lifespan. [ 10 ] This law was first quantified in 1939, when researchers found that the one-year probability of death at advanced age asymptotically approaches a limit of 44% for women and 54% for men. [ 11 ]
However, this evidence depends on the existence of late-life plateaus and deceleration, which can be explained, in humans and other species, by the existence of very rare errors. [ 12 ] [ 13 ] Age-coding error rates below 1 in 10,000 are sufficient to make artificial late-life plateaus, and errors below 1 in 100,000 can generate late-life mortality deceleration. These error rates cannot be ruled out by examining documents [ 13 ] (the standard) because of successful pension fraud, identity theft, forgeries and errors that leave no documentary evidence. This capacity of errors to explain late-life plateaus addresses the "fundamental question in aging research" of whether humans and other species possess an immutable life-span limit, and suggests that a limit to human life span exists. [ 14 ] A theoretical study suggested the maximum human lifespan to be around 125 years using a modified stretched exponential function for human survival curves. [ 15 ] In another study, researchers claimed that there exists a maximum lifespan for humans, and that the human maximal lifespan has been declining since the 1990s. [ 16 ] A theoretical study also suggested that the maximum human life expectancy at birth is limited by the human life characteristic value δ, which is around 104 years. [ 17 ]
In 2017, the United Nations conducted a Bayesian sensitivity analysis of global population burden based on life expectancy projection at birth in future decades. The 95% prediction interval of average life expectancy rises as high as 106 years old by 2090, with ongoing and layered effects on world population and demography should that happen. However, the prediction interval is extremely wide. [ 18 ]
Evidence for maximum lifespan is also provided by the dynamics of physiological indices with age. For example, scientists have observed that a person's VO 2 max value (a measure of the volume of oxygen flow to the cardiac muscle) decreases as a function of age. Therefore, the maximum lifespan of a person could be determined by calculating when the person's VO 2 max value drops below the basal metabolic rate necessary to sustain life, which is approximately 3 ml per kg per minute. [ 19 ] [ page needed ] On the basis of this hypothesis, athletes with a VO 2 max value between 50 and 60 at age 20 would be expected "to live for 100 to 125 years, provided they maintained their physical activity so that their rate of decline in VO 2 max remained constant". [ 20 ]
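The arithmetic behind this estimate can be sketched as follows; the linear decline rate used here is a hypothetical illustration, not a figure from the cited work.

```python
# Rough arithmetic sketch of the VO2 max argument described above; the linear decline rate
# is a hypothetical assumption chosen only for illustration.
def projected_max_lifespan(vo2max_at_20, decline_per_year, threshold=3.0):
    """Age at which VO2 max falls to the basal threshold, assuming linear decline."""
    return 20 + (vo2max_at_20 - threshold) / decline_per_year

# e.g. an athlete starting at 55 ml/kg/min who loses ~0.5 ml/kg/min per year
print(projected_max_lifespan(55, 0.5))   # -> 124.0 years, consistent with the quoted range
```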
Small animals such as birds and squirrels rarely live to their maximum life span, usually dying of accidents , disease or predation . [ citation needed ]
The maximum life span of most species is documented in the AnAge repository (The Animal Ageing and Longevity Database). [ 22 ]
Maximum life span is usually longer for species that are larger, at least among endotherms, [ 23 ] or have effective defenses against predation, such as bat or bird flight, [ 24 ] arboreality, [ 25 ] chemical defenses [ 26 ] or living in social groups. [ 27 ] Among mammals , the presence of a caecal appendix is also correlated with greater maximal longevity. [ 28 ]
The differences in life span between species demonstrate the role of genetics in determining maximum life span ("rate of aging"). The records (in years) are these:
The longest-lived vertebrates have been variously described as
Invertebrate species which continue to grow as long as they live ( e.g., certain clams, some coral species) can on occasion live hundreds of years:
Plants are referred to as annuals which live only one year, biennials which live two years, and perennials which live longer than that. The longest-lived perennials, woody-stemmed plants such as trees and bushes, often live for hundreds and even thousands of years (one may question whether or not they may die of old age). A giant sequoia , General Sherman , is alive and well in its third millennium . A Great Basin Bristlecone Pine called Methuselah is 4,856 years old. [ 55 ] Another Bristlecone Pine called Prometheus was a little older still, showing 4,862 years of growth rings. The exact age of Prometheus remains unknown, as it is likely that growth rings did not form every year due to the harsh environment in which it grew, but it was estimated to be ~4,900 years old when it was cut down in 1964. [ 56 ] The oldest known plant (possibly the oldest living thing) is a clonal Quaking Aspen ( Populus tremuloides ) tree colony in the Fishlake National Forest in Utah called Pando , at about 16,000 years. Lichen , a symbiotic association of algae and fungi, such as Rhizocarpon geographicum , can live upwards of 10,000 years. [ citation needed ]
"Maximum life span" here means the mean life span of the most long-lived 10% of a given cohort. Caloric restriction has not yet been shown to break mammalian world records for longevity. Rats , mice , and hamsters experience maximum life-span extension from a diet that contains all of the nutrients but only 40–60% of the calories that the animals consume when they can eat as much as they want. Mean life span is increased 65% and maximum life span is increased 50%, when caloric restriction is begun just before puberty . [ 57 ] For fruit flies the life extending benefits of calorie restriction are gained immediately at any age upon beginning calorie restriction and ended immediately at any age upon resuming full feeding. [ 58 ]
Most biomedical gerontologists believe that biomedical molecular engineering will eventually extend maximum lifespan and even bring about rejuvenation . [ 59 ] Anti-aging drugs are a potential tool for extending life. [ 60 ]
Aubrey de Grey , a theoretical gerontologist, has proposed that aging can be reversed by strategies for engineered negligible senescence . De Grey has established The Methuselah Mouse Prize to award money to researchers who can extend the maximum life span of mice. So far, three Mouse Prizes have been awarded: one for breaking longevity records to Dr. Andrzej Bartke of Southern Illinois University (using GhR knockout mice); one for late-onset rejuvenation strategies to Dr. Stephen Spindler of the University of California (using caloric restriction initiated late in life); and one to Dr. Z. Dave Sharp for his work with the pharmaceutical rapamycin . [ 61 ] [ better source needed ]
Accumulated DNA damage appears to be a limiting factor in the determination of maximum life span. The theory that DNA damage is the primary cause of aging, and thus a principal determinant of maximum life span, has attracted increased interest in recent years. This is based, in part, on evidence in humans and mice that inherited deficiencies in DNA repair genes often cause accelerated aging. [ 62 ] [ 63 ] [ 64 ] There is also substantial evidence that DNA damage accumulates with age in mammalian tissues, such as those of the brain, muscle, liver, and kidney (reviewed by Bernstein et al. [ 65 ] and see DNA damage theory of aging and DNA damage (naturally occurring) ). One expectation of the theory (that DNA damage is the primary cause of aging) is that among species with differing maximum life spans, the capacity to repair DNA damage should correlate with lifespan. The first experimental test of this idea was by Hart and Setlow [ 66 ] who measured the capacity of cells from seven different mammalian species to carry out DNA repair. They found that nucleotide excision repair capability increased systematically with species longevity. This correlation was striking and stimulated a series of 11 additional experiments in different laboratories over succeeding years on the relationship of nucleotide excision repair and life span in mammalian species (reviewed by Bernstein and Bernstein [ 67 ] ). In general, the findings of these studies indicated a good correlation between nucleotide excision repair capacity and life span. The association between nucleotide excision repair capability and longevity is strengthened by the evidence that defects in nucleotide excision repair proteins in humans and rodents cause features of premature aging, as reviewed by Diderich. [ 63 ]
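To make the kind of comparison Hart and Setlow performed concrete, here is an illustrative sketch using made-up numbers (not their data): the repair-capacity and lifespan values are hypothetical and serve only to show how such a correlation is computed.

```python
# Illustrative sketch only: hypothetical (made-up) values standing in for the kind of data
# Hart and Setlow analysed, to show how a repair-capacity/lifespan correlation is computed.
import numpy as np

lifespan_years  = np.array([3, 4, 8, 20, 30, 40, 95])   # hypothetical species lifespans
repair_capacity = np.array([1, 2, 3, 6, 8, 10, 20])     # hypothetical NER capacity (arb. units)

# Correlations in this literature are often computed against log lifespan
r = np.corrcoef(np.log(lifespan_years), repair_capacity)[0, 1]
print(round(r, 3))
```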
Further support for the theory that DNA damage is the primary cause of aging comes from study of Poly ADP ribose polymerases (PARPs). PARPs are enzymes that are activated by DNA strand breaks and play a role in DNA base excision repair. Burkle et al. reviewed evidence that PARPs, and especially PARP-1 , are involved in maintaining mammalian longevity. [ 68 ] The life span of 13 mammalian species correlated with poly(ADP ribosyl)ation capability measured in mononuclear cells. Furthermore, lymphoblastoid cell lines from peripheral blood lymphocytes of humans over age 100 had a significantly higher poly(ADP-ribosyl)ation capability than control cell lines from younger individuals. [ citation needed ] | https://en.wikipedia.org/wiki/Maximum_life_span |
Maximum likelihood sequence estimation ( MLSE ) is a mathematical algorithm that extracts useful data from a noisy data stream.
For an optimized detector for digital signals, the priority is not to reconstruct the transmitted waveform, but to estimate the transmitted data with the fewest possible errors. The receiver emulates the distorted channel: all possible transmitted data streams are fed into this distorted channel model, and the receiver compares the resulting time responses with the actual received signal to determine the most likely transmitted sequence.
In the most computationally straightforward cases, the root mean square deviation can be used as the decision criterion [ 1 ] that yields the lowest error probability.
Suppose that there is an underlying signal { x ( t )}, of which an observed signal { r ( t )} is available. The observed signal r is related to x via a transformation that may be nonlinear and may involve attenuation, and would usually involve the incorporation of random noise . The statistical parameters of this transformation are assumed to be known. The problem to be solved is to use the observations { r ( t )} to create a good estimate of { x ( t )}.
Maximum likelihood sequence estimation is formally the application of maximum likelihood to this problem. That is, the estimate of { x ( t )} is defined to be a sequence of values which maximizes the functional $p(r \mid x)$,
where p ( r | x ) denotes the conditional joint probability density function of the observed series { r ( t )} given that the underlying series has the values { x ( t )}.
In contrast, the related method of maximum a posteriori (MAP) estimation is more complex than maximum likelihood sequence estimation and requires a known distribution (in Bayesian terms , a prior distribution ) for the underlying signal. In this case the estimate of { x ( t )} is defined to be a sequence of values which maximizes the functional $p(x \mid r)$,
where p ( x | r ) denotes the conditional joint probability density function of the underlying series { x ( t )} given that the observed series has taken the values { r ( t )}. Bayes' theorem implies that $p(x \mid r) = \dfrac{p(r \mid x)\, p(x)}{p(r)}$.
In cases where the contribution of random noise is additive and has a multivariate normal distribution , the problem of maximum likelihood sequence estimation can be reduced to that of a least squares minimization. | https://en.wikipedia.org/wiki/Maximum_likelihood_sequence_estimation |
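A minimal brute-force sketch of MLSE for a toy channel (the channel taps, noise level, and sequence length are all assumptions for illustration): with additive Gaussian noise, maximizing the likelihood over candidate sequences is equivalent to minimizing the squared error between the received signal and each candidate's channel output, as described above. Practical receivers use the Viterbi algorithm rather than exhaustive search, but the criterion is the same.

```python
# Minimal sketch (assumed toy channel, not from the source): brute-force MLSE for a short
# binary sequence through a known 2-tap ISI channel with additive Gaussian noise. Because
# the noise is Gaussian, maximising the likelihood is the same as minimising the squared
# error between the received signal and each candidate's channel output.
import itertools
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.5])                 # known channel impulse response (assumption)
x_true = np.array([1, -1, 1, 1, -1, 1])  # transmitted +/-1 symbols

r = np.convolve(x_true, h) + 0.3 * rng.standard_normal(len(x_true) + len(h) - 1)

best, best_cost = None, np.inf
for bits in itertools.product([-1, 1], repeat=len(x_true)):
    x = np.array(bits)
    cost = np.sum((r - np.convolve(x, h)) ** 2)   # least-squares metric
    if cost < best_cost:
        best, best_cost = x, cost

print(x_true, best)   # the estimate usually matches the transmitted sequence
```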
In mathematics , the maximum modulus principle in complex analysis states that if f {\displaystyle f} is a holomorphic function , then the modulus | f | {\displaystyle |f|} cannot exhibit a strict maximum that is strictly within the domain of f {\displaystyle f} .
In other words, either f {\displaystyle f} is locally a constant function , or, for any point z 0 {\displaystyle z_{0}} inside the domain of f {\displaystyle f} there exist other points arbitrarily close to z 0 {\displaystyle z_{0}} at which | f | {\displaystyle |f|} takes larger values.
Let f {\displaystyle f} be a holomorphic function on some connected open subset D {\displaystyle D} of the complex plane C {\displaystyle \mathbb {C} } and taking complex values. If z 0 {\displaystyle z_{0}} is a point in D {\displaystyle D} such that $|f(z_{0})| \geq |f(z)|$
for all z {\displaystyle z} in some neighborhood of z 0 {\displaystyle z_{0}} , then f {\displaystyle f} is constant on D {\displaystyle D} .
This statement can be viewed as a special case of the open mapping theorem , which states that a nonconstant holomorphic function maps open sets to open sets: If | f | {\displaystyle |f|} attains a local maximum at z {\displaystyle z} , then the image of a sufficiently small open neighborhood of z {\displaystyle z} cannot be open, so f {\displaystyle f} is constant.
Suppose that D {\displaystyle D} is a bounded nonempty connected open subset of C {\displaystyle \mathbb {C} } .
Let D ¯ {\displaystyle {\overline {D}}} be the closure of D {\displaystyle D} .
Suppose that f : D ¯ → C {\displaystyle f\colon {\overline {D}}\to \mathbb {C} } is a continuous function that is holomorphic on D {\displaystyle D} .
Then | f ( z ) | {\displaystyle |f(z)|} attains a maximum at some point of the boundary of D {\displaystyle D} .
This follows from the first version as follows. Since D ¯ {\displaystyle {\overline {D}}} is compact and nonempty, the continuous function | f ( z ) | {\displaystyle |f(z)|} attains a maximum at some point z 0 {\displaystyle z_{0}} of D ¯ {\displaystyle {\overline {D}}} . If z 0 {\displaystyle z_{0}} is not on the boundary, then the maximum modulus principle implies that f {\displaystyle f} is constant, so | f ( z ) | {\displaystyle |f(z)|} also attains the same maximum at any point of the boundary.
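The boundary-maximum statement can be checked numerically for a specific function; the following sketch (with f(z) = exp(z) chosen purely as an example) samples |f| on the closed unit disk and confirms that the largest sampled value occurs on the boundary circle.

```python
# Numerical illustration (not a proof): sample |f(z)| for the holomorphic f(z) = exp(z)
# on a grid covering the closed unit disk; the largest sampled modulus occurs at a point
# essentially on the boundary, as the maximum modulus principle predicts.
import numpy as np

xs = np.linspace(-1, 1, 401)
X, Y = np.meshgrid(xs, xs)
Z = X + 1j * Y
inside = np.abs(Z) <= 1.0

modulus = np.abs(np.exp(Z[inside]))
z_max = Z[inside][np.argmax(modulus)]

print(z_max, abs(z_max))   # |z_max| is ~1: the maximum sits on the boundary circle
```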
For a holomorphic function f {\displaystyle f} on a connected open set D {\displaystyle D} of C {\displaystyle \mathbb {C} } , if z 0 {\displaystyle z_{0}} is a point in D {\displaystyle D} such that $0 < |f(z_{0})| \leq |f(z)|$
for all z {\displaystyle z} in some neighborhood of z 0 {\displaystyle z_{0}} , then f {\displaystyle f} is constant on D {\displaystyle D} .
Proof: Apply the maximum modulus principle to 1 / f {\displaystyle 1/f} .
One can use the equality $\log f(z) = \ln |f(z)| + i \arg f(z)$
for complex natural logarithms to deduce that ln | f ( z ) | {\displaystyle \ln |f(z)|} is a harmonic function . Since z 0 {\displaystyle z_{0}} is a local maximum for this function also, it follows from the maximum principle that | f ( z ) | {\displaystyle |f(z)|} is constant. Then, using the Cauchy–Riemann equations we show that f ′ ( z ) {\displaystyle f'(z)} = 0, and thus that f ( z ) {\displaystyle f(z)} is constant as well. Similar reasoning shows that | f ( z ) | {\displaystyle |f(z)|} can only have a local minimum (which necessarily has value 0) at an isolated zero of f ( z ) {\displaystyle f(z)} .
Another proof works by using Gauss's mean value theorem to "force" all points within overlapping open disks to assume the same value as the maximum. The disks are laid such that their centers form a polygonal path from the value where f ( z ) {\displaystyle f(z)} is maximized to any other point in the domain, while being totally contained within the domain. Thus the existence of a maximum value implies that all the values in the domain are the same, thus f ( z ) {\displaystyle f(z)} is constant.
Source: [ 1 ]
As D {\displaystyle D} is open, there exists B ¯ ( a , r ) {\displaystyle {\overline {B}}(a,r)} (a closed ball centered at a ∈ D {\displaystyle a\in D} with radius r > 0 {\displaystyle r>0} ) such that B ¯ ( a , r ) ⊂ D {\displaystyle {\overline {B}}(a,r)\subset D} . We then define the boundary of the closed ball with positive orientation as γ ( t ) = a + r e i t , t ∈ [ 0 , 2 π ] {\displaystyle \gamma (t)=a+re^{it},t\in [0,2\pi ]} . Invoking Cauchy's integral formula, we obtain $f(a) = \dfrac{1}{2\pi i}\int_{\gamma} \dfrac{f(z)}{z-a}\,dz = \dfrac{1}{2\pi}\int_{0}^{2\pi} f(a+re^{it})\,dt$.
Taking moduli in the mean value formula gives $|f(a)| \leq \frac{1}{2\pi}\int_{0}^{2\pi} |f(a+re^{it})|\,dt$. On the other hand, since $|f(a)|$ is a maximum, | f ( a ) | − | f ( a + r e i t ) | ≥ 0 {\displaystyle |f(a)|-|f(a+re^{it})|\geq 0} for all t ∈ [ 0 , 2 π ] {\displaystyle t\in [0,2\pi ]} ; this continuous, non-negative integrand therefore integrates to zero, so | f ( a ) | = | f ( a + r e i t ) | {\displaystyle |f(a)|=|f(a+re^{it})|} for every t {\displaystyle t} . This also holds for all balls of radius less than r {\displaystyle r} centered at a {\displaystyle a} , so $|f|$ is constant on the ball; since a holomorphic function with constant modulus on an open set is constant, f ( z ) = f ( a ) {\displaystyle f(z)=f(a)} for all z ∈ B ¯ ( a , r ) {\displaystyle z\in {\overline {B}}(a,r)} .
Now consider the constant function g ( z ) = f ( a ) {\displaystyle g(z)=f(a)} for all z ∈ D {\displaystyle z\in D} . Then one can construct a sequence of distinct points located in B ¯ ( a , r ) {\displaystyle {\overline {B}}(a,r)} where the holomorphic function g − f {\displaystyle g-f} vanishes. As B ¯ ( a , r ) {\displaystyle {\overline {B}}(a,r)} is closed, the sequence converges to some point in B ¯ ( a , r ) ⊂ D {\displaystyle {\overline {B}}(a,r)\subset D} . By the identity theorem , f − g {\displaystyle f-g} therefore vanishes everywhere in D {\displaystyle D} , which implies f ( z ) = f ( a ) {\displaystyle f(z)=f(a)} for all z ∈ D {\displaystyle z\in D} .
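The mean value identity used at the start of this proof can also be verified numerically; the function and the point below are arbitrary illustrative choices.

```python
# Numerical check (illustration only) of the mean-value identity used above:
# f(a) = (1 / 2*pi) * integral of f(a + r*e^{it}) dt, here for f(z) = z**2 + 1.
import numpy as np

f = lambda z: z**2 + 1
a, r = 0.3 + 0.2j, 0.5

# Average f over equally spaced points on the circle of radius r about a
t = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
circle_average = f(a + r * np.exp(1j * t)).mean()

print(f(a), circle_average)   # the two values agree to numerical precision
```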
A physical interpretation of this principle comes from the heat equation . That is, since log | f ( z ) | {\displaystyle \log |f(z)|} is harmonic, it is thus the steady state of a heat flow on the region D {\displaystyle D} . Suppose a strict maximum was attained on the interior of D {\displaystyle D} , the heat at this maximum would be dispersing to the points around it, which would contradict the assumption that this represents the steady state of a system.
The maximum modulus principle has many uses in complex analysis, and may be used to prove the following: | https://en.wikipedia.org/wiki/Maximum_modulus_principle |
In phylogenetics and computational phylogenetics , maximum parsimony is an optimality criterion under which the phylogenetic tree that minimizes the total number of character-state changes (or the total cost of differentially weighted character-state changes) is preferred. Under the maximum-parsimony criterion, the optimal tree will minimize the amount of homoplasy (i.e., convergent evolution , parallel evolution , and evolutionary reversals ). In other words, under this criterion, the shortest possible tree that explains the data is considered best. Some of the basic ideas behind maximum parsimony were presented by James S. Farris [ 1 ] in 1970 and Walter M. Fitch in 1971. [ 2 ]
Maximum parsimony is an intuitive and simple criterion, and it is popular for this reason. However, although it is easy to score a phylogenetic tree (by counting the number of character-state changes), there is no algorithm to quickly generate the most-parsimonious tree. Instead, the most-parsimonious tree must be sought in "tree space" (i.e., amongst all possible trees). For a small number of taxa (i.e., fewer than nine) it is possible to do an exhaustive search , in which every possible tree is scored, and the best one is selected. For nine to twenty taxa, it will generally be preferable to use branch-and-bound , which is also guaranteed to return the best tree. For greater numbers of taxa, a heuristic search must be performed.
Because the most-parsimonious tree is always the shortest possible tree, this means that—in comparison to a hypothetical "true" tree that actually describes the unknown evolutionary history of the organisms under study—the "best" tree according to the maximum-parsimony criterion will often underestimate the actual evolutionary change that could have occurred. In addition, maximum parsimony is not statistically consistent. That is, it is not guaranteed to produce the true tree with high probability, given sufficient data. As demonstrated in 1978 by Joe Felsenstein , [ 3 ] maximum parsimony can be inconsistent under certain conditions, such as long-branch attraction . Of course, any phylogenetic algorithm could also be statistically inconsistent if the model it employs to estimate the preferred tree does not accurately match the way that evolution occurred in that clade. This is unknowable. Therefore, while statistical consistency is an interesting theoretical property, it lies outside the realm of testability, and is irrelevant to empirical phylogenetic studies. [ 4 ]
In phylogenetics, parsimony is mostly interpreted as favoring the trees that minimize the amount of evolutionary change required (see for example [ 2 ] ). Alternatively, phylogenetic parsimony can be characterized as favoring the trees that maximize explanatory power by minimizing the number of observed similarities that cannot be explained by inheritance and common descent. [ 5 ] [ 6 ] Minimization of required evolutionary change on the one hand and maximization of observed similarities that can be explained as homology on the other may result in different preferred trees when some observed features are not applicable in some groups that are included in the tree, and the latter can be seen as the more general approach. [ 7 ] [ 8 ] [ 9 ]
While evolution is not an inherently parsimonious process, centuries of scientific experience lend support to the aforementioned principle of parsimony ( Occam's razor ). Namely, the supposition of a simpler, more parsimonious chain of events is preferable to the supposition of a more complicated, less parsimonious chain of events. Hence, parsimony ( sensu lato ) is typically sought in inferring phylogenetic trees, and in scientific explanation generally. [ 10 ]
Parsimony is part of a class of character-based tree estimation methods which use a matrix of discrete phylogenetic characters and character states to infer one or more optimal phylogenetic trees for a set of taxa , commonly a set of species or reproductively isolated populations of a single species. These methods operate by evaluating candidate phylogenetic trees according to an explicit optimality criterion ; the tree with the most favorable score is taken as the best hypothesis of the phylogenetic relationships of the included taxa. Maximum parsimony is used with most kinds of phylogenetic data; until recently, it was the only widely used character-based tree estimation method used for morphological data.
Inferring phylogenies is not a trivial problem. A huge number of possible phylogenetic trees exist for any reasonably sized set of taxa; for example, a mere ten species gives over two million possible unrooted trees. These possibilities must be searched to find a tree that best fits the data according to the optimality criterion. However, the data themselves do not lead to a simple, arithmetic solution to the problem. Ideally, we would expect the distribution of whatever evolutionary characters (such as phenotypic traits or alleles ) to directly follow the branching pattern of evolution. Thus we could say that if two organisms possess a shared character, they should be more closely related to each other than to a third organism that lacks this character (provided that character was not present in the last common ancestor of all three, in which case it would be a symplesiomorphy ). We would predict that bats and monkeys are more closely related to each other than either is to an elephant, because male bats and monkeys possess external testicles , which elephants lack. However, we cannot say that bats and monkeys are more closely related to one another than they are to whales, though the two have external testicles absent in whales, because we believe that the males in the last common ancestral species of the three had external testicles.
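The count quoted above comes from the standard formula that the number of unrooted binary trees on n labelled taxa is the double factorial (2n − 5)!!; a short sketch makes the growth explicit.

```python
# Quick sketch of the combinatorics mentioned above: the number of unrooted binary trees on
# n labelled taxa is the double factorial (2n - 5)!!, which already exceeds two million at n = 10.
def unrooted_trees(n):
    count = 1
    for k in range(3, 2 * n - 4, 2):   # product of the odd numbers 3, 5, ..., 2n-5
        count *= k
    return count

for n in (4, 6, 8, 10, 20):
    print(n, unrooted_trees(n))        # n = 10 -> 2,027,025
```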
However, the phenomena of convergent evolution , parallel evolution , and evolutionary reversals (collectively termed homoplasy ) add an unpleasant wrinkle to the problem of inferring phylogeny. For a number of reasons, two organisms can possess a trait inferred to have not been present in their last common ancestor: If we naively took the presence of this trait as evidence of a relationship, we would infer an incorrect tree. Empirical phylogenetic data may include substantial homoplasy, with different parts of the data suggesting sometimes very different relationships. Methods used to estimate phylogenetic trees are explicitly intended to resolve the conflict within the data by picking the phylogenetic tree that is the best fit to all the data overall, accepting that some data simply will not fit. It is often mistakenly believed that parsimony assumes that convergence is rare; in fact, even convergently derived characters have some value in maximum-parsimony-based phylogenetic analyses, and the prevalence of convergence does not systematically affect the outcome of parsimony-based methods. [ 11 ]
Data that do not fit a tree perfectly are not simply "noise", they can contain relevant phylogenetic signal in some parts of a tree, even if they conflict with the tree overall. In the whale example given above, the lack of external testicles in whales is homoplastic: It reflects a return to the condition inferred to have been present in ancient ancestors of mammals, whose testicles were internal. This inferred similarity between whales and ancient mammal ancestors is in conflict with the tree we accept based on the weight of other characters, since it implies that the mammals with external testicles should form a group excluding whales. However, among the whales, the reversal to internal testicles actually correctly associates the various types of whales (including dolphins and porpoises) into the group Cetacea . Still, the determination of the best-fitting tree—and thus which data do not fit the tree—is a complex process. Maximum parsimony is one method developed to do this.
The input data used in a maximum parsimony analysis is in the form of "characters" for a range of taxa. There is no generally agreed-upon definition of a phylogenetic character, but operationally a character can be thought of as an attribute, an axis along which taxa are observed to vary. These attributes can be physical (morphological), molecular, genetic, physiological, or behavioral. The only widespread agreement on characters seems to be that variation used for character analysis should reflect heritable variation . Whether it must be directly heritable, or whether indirect inheritance (e.g., learned behaviors) is acceptable, is not entirely resolved.
Each character is divided into discrete character states , into which the variations observed are classified. Character states are often formulated as descriptors, describing the condition of the character substrate. For example, the character "eye color" might have the states "blue" and "brown." Characters can have two or more states (they can have only one, but these characters lend nothing to a maximum parsimony analysis, and are often excluded).
Coding characters for phylogenetic analysis is not an exact science, and there are numerous complicating issues. Typically, taxa are scored with the same state if they are more similar to one another in that particular attribute than each is to taxa scored with a different state. This is not straightforward when character states are not clearly delineated or when they fail to capture all of the possible variation in a character. How would one score the previously mentioned character for a taxon (or individual) with hazel eyes? Or green? As noted above, character coding is generally based on similarity: Hazel and green eyes might be lumped with blue because they are more similar to that color (being light), and the character could be then recoded as "eye color: light; dark." Alternatively, there can be multi-state characters, such as "eye color: brown; hazel, blue; green."
Ambiguities in character state delineation and scoring can be a major source of confusion, dispute, and error in phylogenetic analysis using character data. Note that, in the above example, "eyes: present; absent" is also a possible character, which creates issues because "eye color" is not applicable if eyes are not present. For such situations, a "?" ("unknown") is scored, although sometimes "X" or "-" (the latter usually in sequence data) are used to distinguish cases where a character cannot be scored from a case where the state is simply unknown. Current implementations of maximum parsimony generally treat unknown values in the same manner: the reasons the data are unknown have no particular effect on analysis. Effectively, the program treats a ? as if it held the state that would involve the fewest extra steps in the tree (see below), although this is not an explicit step in the algorithm.
Genetic data are particularly amenable to character-based phylogenetic methods such as maximum parsimony because protein and nucleotide sequences are naturally discrete: A particular position in a nucleotide sequence can be either adenine , cytosine , guanine , or thymine / uracil , or a sequence gap ; a position ( residue ) in a protein sequence will be one of the basic amino acids or a sequence gap. Thus, character scoring is rarely ambiguous, except in cases where sequencing methods fail to produce a definitive assignment for a particular sequence position. Sequence gaps are sometimes treated as characters, although there is no consensus on how they should be coded.
Characters can be treated as unordered or ordered. For a binary (two-state) character, this makes little difference. For a multi-state character, unordered characters can be thought of as having an equal "cost" (in terms of number of "evolutionary events") to change from any one state to any other; complementarily, they do not require passing through intermediate states. Ordered characters have a particular sequence in which the states must occur through evolution, such that going between some states requires passing through an intermediate. This can be thought of complementarily as having different costs to pass between different pairs of states. In the eye-color example above, it is possible to leave it unordered, which imposes the same evolutionary "cost" to go from brown-blue, green-blue, green-hazel, etc. Alternatively, it could be ordered brown-hazel-green-blue; this would normally imply that it would cost two evolutionary events to go from brown-green, three from brown-blue, but only one from brown-hazel. This can also be thought of as requiring eyes to evolve through a "hazel stage" to get from brown to green, and a "green stage" to get from hazel to blue, etc. For many characters, it is not obvious if and how they should be ordered. On the contrary, for characters that represent discretization of an underlying continuous variable, like shape, size, and ratio characters, ordering is logical, [ 12 ] and simulations have shown that this improves ability to recover correct clades, while decreasing the recovering of erroneous clades. [ 13 ] [ 14 ] [ 15 ]
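A small sketch of the cost difference between unordered and ordered treatments, using the eye-colour states discussed above (the ordering brown–hazel–green–blue is the one given in the text):

```python
# Small sketch of the character-ordering idea above, using the eye-colour example; the
# state labels and the ordering are those discussed in the text, used here illustratively.
states = ["brown", "hazel", "green", "blue"]

def unordered_cost(a, b):
    # any change between distinct states costs one step
    return 0 if a == b else 1

def ordered_cost(a, b):
    # states must be traversed in sequence, so the cost is the number of steps along the order
    return abs(states.index(a) - states.index(b))

print(unordered_cost("brown", "blue"), ordered_cost("brown", "blue"))   # 1 vs 3
print(unordered_cost("brown", "hazel"), ordered_cost("brown", "hazel")) # 1 vs 1
```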
There is a lively debate on the utility and appropriateness of character ordering, but no consensus. Some authorities order characters when there is a clear logical, ontogenetic , or evolutionary transition among the states (for example, "legs: short; medium; long"). Some accept only some of these criteria. Some run an unordered analysis, and order characters that show a clear order of transition in the resulting tree (which practice might be accused of circular reasoning ). Some authorities refuse to order characters at all, suggesting that it biases an analysis to require evolutionary transitions to follow a particular path.
It is also possible to apply differential weighting to individual characters. This is usually done relative to a "cost" of 1. Thus, some characters might be seen as more likely to reflect the true evolutionary relationships among taxa, and thus they might be weighted at a value 2 or more; changes in these characters would then count as two evolutionary "steps" rather than one when calculating tree scores (see below). There has been much discussion in the past about character weighting. Most authorities now weight all characters equally, although exceptions are common. For example, allele frequency data is sometimes pooled in bins and scored as an ordered character. In these cases, the character itself is often downweighted so that small changes in allele frequencies count less than major changes in other characters. Also, the third codon position in a coding nucleotide sequence is particularly labile, and is sometimes downweighted, or given a weight of 0, on the assumption that it is more likely to exhibit homoplasy. In some cases, repeated analyses are run, with characters reweighted in inverse proportion to the degree of homoplasy discovered in the previous analysis (termed successive weighting ); this is another technique that might be considered circular reasoning .
Character state changes can also be weighted individually. This is often done for nucleotide sequence data; it has been empirically determined that certain base changes (A-C, A-T, G-C, G-T, and the reverse changes) occur much less often than others (A-G, C-T, and their reverse changes). These changes are therefore often weighted more. As shown above in the discussion of character ordering, ordered characters can be thought of as a form of character state weighting.
Some systematists prefer to exclude characters known to be, or suspected to be, highly homoplastic or that have a large number of unknown entries ("?"). As noted below, theoretical and simulation work has demonstrated that this is likely to sacrifice accuracy rather than improve it. This is also the case with characters that are variable in the terminal taxa: theoretical, congruence, and simulation studies have all demonstrated that such polymorphic characters contain significant phylogenetic information. [ citation needed ]
The time required for a parsimony analysis (or any phylogenetic analysis) is proportional to the number of taxa (and characters) included in the analysis. Also, because more taxa require more branches to be estimated, more uncertainty may be expected in large analyses. Because data collection costs in time and money often scale directly with the number of taxa included, most analyses include only a fraction of the taxa that could have been sampled. Indeed, some authors have contended that four taxa (the minimum required to produce a meaningful unrooted tree) are all that is necessary for accurate phylogenetic analysis, and that more characters are more valuable than more taxa in phylogenetics. This has led to a raging controversy about taxon sampling.
Empirical, theoretical, and simulation studies have led to a number of dramatic demonstrations of the importance of adequate taxon sampling. Most of these can be summarized by a simple observation: a phylogenetic data matrix has dimensions of characters times taxa. Doubling the number of taxa doubles the amount of information in a matrix just as surely as doubling the number of characters. Each taxon represents a new sample for every character, but, more importantly, it (usually) represents a new combination of character states. These character states can not only determine where that taxon is placed on the tree, they can inform the entire analysis, possibly causing different relationships among the remaining taxa to be favored by changing estimates of the pattern of character changes.
The most disturbing weakness of parsimony analysis, that of long-branch attraction (see below) is particularly pronounced with poor taxon sampling, especially in the four-taxon case. This is a well-understood case in which additional character sampling may not improve the quality of the estimate. As taxa are added, they often break up long branches (especially in the case of fossils), effectively improving the estimation of character state changes along them. Because of the richness of information added by taxon sampling, it is even possible to produce highly accurate estimates of phylogenies with hundreds of taxa using only a few thousand characters. [ citation needed ]
Although many studies have been performed, there is still much work to be done on taxon sampling strategies. Because of advances in computer performance, and the reduced cost and increased automation of molecular sequencing, sample sizes overall are on the rise, and studies addressing the relationships of hundreds of taxa (or other terminal entities, such as genes) are becoming common. Of course, this is not to say that adding characters is not also useful; the number of characters is increasing as well.
Some systematists prefer to exclude taxa based on the number of unknown character entries ("?") they exhibit, or because they tend to "jump around" the tree in analyses (i.e., they are "wildcards"). As noted below, theoretical and simulation work has demonstrated that this is likely to sacrifice accuracy rather than improve it. Although these taxa may generate more most-parsimonious trees (see below), methods such as agreement subtrees and reduced consensus can still extract information on the relationships of interest.
It has been observed that inclusion of more taxa tends to lower overall support values ( bootstrap percentages or decay indices, see below). The cause of this is clear: as additional taxa are added to a tree, they subdivide the branches to which they attach, and thus dilute the information that supports that branch. While support for individual branches is reduced, support for the overall relationships is actually increased. Consider analysis that produces the following tree: (fish, (lizard, (whale, (cat, monkey)))). Adding a rat and a walrus will probably reduce the support for the (whale, (cat, monkey)) clade, because the rat and the walrus may fall within this clade, or outside of the clade, and since these five animals are all relatively closely related, there should be more uncertainty about their relationships. Within error, it may be impossible to determine any of these animals' relationships relative to one another. However, the rat and the walrus will probably add character data that cements the grouping any two of these mammals exclusive of the fish or the lizard; where the initial analysis might have been misled, say, by the presence of fins in the fish and the whale, the presence of the walrus, with blubber and fins like a whale but whiskers like a cat and a rat, firmly ties the whale to the mammals.
To cope with this problem, agreement subtrees , reduced consensus , and double-decay analysis seek to identify supported relationships (in the form of "n-taxon statements," such as the four-taxon statement "(fish, (lizard, (cat, whale)))") rather than whole trees. If the goal of an analysis is a resolved tree, as is the case for comparative phylogenetics , these methods cannot solve the problem. However, if the tree estimate is so poorly supported, the results of any analysis derived from the tree will probably be too suspect to use anyway.
A maximum parsimony analysis runs in a very straightforward fashion. Trees are scored according to the degree to which they imply a parsimonious distribution of the character data. The most parsimonious tree for the dataset represents the preferred hypothesis of relationships among the taxa in the analysis.
Trees are scored (evaluated) by using a simple algorithm to determine how many "steps" (evolutionary transitions) are required to explain the distribution of each character. A step is, in essence, a change from one character state to another, although with ordered characters some transitions require more than one step. Contrary to popular belief, the algorithm does not explicitly assign particular character states to nodes (branch junctions) on a tree: the fewest steps can involve multiple, equally costly assignments and distributions of evolutionary transitions. What is optimized is the total number of changes.
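For an unordered character, the minimum number of steps implied by a rooted binary tree can be counted with Fitch's small-parsimony algorithm. The following Python sketch is purely illustrative (it is not part of the article, and the nested-tuple tree encoding is an assumption); note that, as described above, it returns a count of changes without committing to a unique assignment of ancestral states.

```python
# Illustrative sketch only: Fitch's small-parsimony count for one unordered
# character on a rooted binary tree encoded as nested tuples of tip states.

def fitch_steps(tree):
    """Return (state_set, steps) for `tree`, which is either a tip state
    (a string) or a pair (left_subtree, right_subtree)."""
    if isinstance(tree, str):                 # a tip: its state set is itself
        return {tree}, 0
    left, right = tree
    lset, lsteps = fitch_steps(left)
    rset, rsteps = fitch_steps(right)
    common = lset & rset
    if common:                                # the children can agree on a state
        return common, lsteps + rsteps
    return lset | rset, lsteps + rsteps + 1   # disagreement costs one step

# For this tree and character, one step (a single change involving state "G") suffices.
print(fitch_steps((("A", "A"), ("A", "G")))[1])   # -> 1

# A tree's parsimony score is the sum of such counts over all characters;
# the most parsimonious tree minimizes that total.
```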
There are many more possible phylogenetic trees than can be searched exhaustively for more than eight taxa or so. A number of algorithms are therefore used to search among the possible trees. Many of these involve taking an initial tree (usually the favored tree from the last iteration of the algorithm), and perturbing it to see if the change produces a higher score.
The trees resulting from parsimony search are unrooted: they show all the possible relationships of the included taxa, but they lack any statement on relative times of divergence. The user chooses a particular branch with which to root the tree. This branch is then taken to be outside all the other branches of the tree, which together form a monophyletic group. This imparts a sense of relative time to the tree. Incorrect choice of a root can result in incorrect relationships on the tree, even if the tree is itself correct in its unrooted form.
Parsimony analysis often returns a number of equally most-parsimonious trees (MPTs). A large number of MPTs is often seen as an analytical failure, and is widely believed to be related to the number of missing entries ("?") in the dataset, characters showing too much homoplasy, or the presence of topologically labile "wildcard" taxa (which may have many missing entries). Numerous methods have been proposed to reduce the number of MPTs, including removing characters or taxa with large amounts of missing data before analysis, removing or downweighting highly homoplastic characters ( successive weighting ) or removing wildcard taxa (the phylogenetic trunk method) a posteriori and then reanalyzing the data.
Numerous theoretical and simulation studies have demonstrated that highly homoplastic characters, characters and taxa with abundant missing data, and "wildcard" taxa can still contribute useful information to the analysis. Although excluding characters or taxa may appear to improve resolution, the resulting tree is based on less data, and is therefore a less reliable estimate of the phylogeny (unless the characters or taxa are non-informative, see safe taxonomic reduction ). Today's general consensus is that having multiple MPTs is a valid analytical result; it simply indicates that there is insufficient data to resolve the tree completely. In many cases, there is substantial common structure in the MPTs, and differences are slight and involve uncertainty in the placement of a few taxa. There are a number of methods for summarizing the relationships within this set, including consensus trees , which show common relationships among all the taxa, and pruned agreement subtrees , which show common structure by temporarily pruning "wildcard" taxa from every tree until they all agree. Reduced consensus takes this one step further, by showing all subtrees (and therefore all relationships) supported by the input trees.
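As an illustrative sketch of how shared structure can be summarized across a set of MPTs, the following Python fragment (not from the article; representing each tree by its set of clades is an assumed encoding) computes strict and majority-rule consensus clades.

```python
# Illustrative sketch only: summarizing a set of equally parsimonious trees,
# each encoded (by assumption) as a set of clades, i.e. frozensets of taxon names.

from collections import Counter

def strict_consensus(trees):
    """Clades present in every input tree."""
    common = set(trees[0])
    for tree in trees[1:]:
        common &= set(tree)
    return common

def majority_rule(trees, threshold=0.5):
    """Clades present in more than `threshold` of the input trees."""
    counts = Counter(clade for tree in trees for clade in set(tree))
    return {clade for clade, n in counts.items() if n > threshold * len(trees)}

mpt1 = {frozenset({"cat", "monkey"}), frozenset({"cat", "monkey", "whale"})}
mpt2 = {frozenset({"cat", "monkey"}), frozenset({"cat", "monkey", "rat"})}
print(strict_consensus([mpt1, mpt2]))   # only the (cat, monkey) clade is shared
```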
Even if multiple MPTs are returned, parsimony analysis still basically produces a point-estimate, lacking confidence intervals of any sort. This has often been levelled as a criticism, since there is certainly error in estimating the most-parsimonious tree, and the method does not inherently include any means of establishing how sensitive its conclusions are to this error. Several methods have been used to assess support.
Jackknifing and bootstrapping , well-known statistical resampling procedures, have been employed with parsimony analysis. The jackknife, which involves resampling without replacement ("leave-one-out") can be employed on characters or taxa; interpretation may become complicated in the latter case, because the variable of interest is the tree, and comparison of trees with different taxa is not straightforward. The bootstrap, resampling with replacement (sample x items randomly out of a sample of size x, but items can be picked multiple times), is only used on characters, because adding duplicate taxa does not change the result of a parsimony analysis. The bootstrap is much more commonly employed in phylogenetics (as elsewhere); both methods involve an arbitrary but large number of repeated iterations involving perturbation of the original data followed by analysis. The resulting MPTs from each analysis are pooled, and the results are usually presented on a 50% Majority Rule Consensus tree, with individual branches (or nodes) labelled with the percentage of bootstrap MPTs in which they appear. This "bootstrap percentage" (which is not a P-value , as is sometimes claimed) is used as a measure of support. Technically, it is supposed to be a measure of repeatability, the probability that that branch (node, clade) would be recovered if the taxa were sampled again. Experimental tests with viral phylogenies suggest that the bootstrap percentage is not a good estimator of repeatability for phylogenetics, but it is a reasonable estimator of accuracy. [ citation needed ] In fact, it has been shown that the bootstrap percentage, as an estimator of accuracy, is biased, and that this bias results on average in an underestimate of confidence (such that as little as 70% support might really indicate up to 95% confidence). However, the direction of bias cannot be ascertained in individual cases, so assuming that high bootstrap support values indicate even higher confidence is unwarranted.
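A hedged sketch of the character bootstrap described above: columns of the data matrix are resampled with replacement and the clades recovered in each replicate are tallied. Here `infer_tree` is a hypothetical placeholder for whatever tree-building routine is used (for example, a parsimony search), and the matrix layout is an assumption.

```python
# Hedged sketch of the character bootstrap: `infer_tree` is a hypothetical
# stand-in for a tree search returning the clades of its best tree(s), and the
# data matrix is assumed to map each taxon to a list of character states.

import random
from collections import Counter

def bootstrap_support(matrix, infer_tree, replicates=100):
    taxa = list(matrix)
    n_chars = len(next(iter(matrix.values())))
    clade_counts = Counter()
    for _ in range(replicates):
        # resample character columns with replacement; taxa are never resampled
        cols = [random.randrange(n_chars) for _ in range(n_chars)]
        resampled = {t: [matrix[t][c] for c in cols] for t in taxa}
        for clade in infer_tree(resampled):
            clade_counts[clade] += 1
    # support for a clade = percentage of replicates in which it was recovered
    return {clade: 100.0 * n / replicates for clade, n in clade_counts.items()}
```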
Another means of assessing support is Bremer support , [ 16 ] [ 17 ] or the decay index which is a parameter of a given data set, rather than an estimate based on pseudoreplicated subsamples, as are the bootstrap and jackknife procedures described above. Bremer support (also known as branch support) is simply the difference in number of steps between the score of the MPT(s), and the score of the most parsimonious tree that does not contain a particular clade (node, branch). It can be thought of as the number of steps you have to add to lose that clade; implicitly, it is meant to suggest how great the error in the estimate of the score of the MPT must be for the clade to no longer be supported by the analysis, although this is not necessarily what it does. Branch support values are often fairly low for modestly-sized data sets (one or two steps being typical), but they often appear to be proportional to bootstrap percentages. As data matrices become larger, branch support values often continue to increase as bootstrap values plateau at 100%. Thus, for large data matrices, branch support values may provide a more informative means to compare support for strongly-supported branches. [ 18 ] However, interpretation of decay values is not straightforward, and they seem to be preferred by authors with philosophical objections to the bootstrap (although many morphological systematists, especially paleontologists, report both). Double-decay analysis is a decay counterpart to reduced consensus that evaluates the decay index for all possible subtree relationships (n-taxon statements) within a tree.
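Because Bremer support is simply a difference of tree scores, it can be sketched in a few lines; `best_score` and `best_score_without_clade` below are hypothetical stand-ins for parsimony searches (unconstrained, and constrained to exclude the clade of interest) that are not shown.

```python
# Hedged sketch of a Bremer (decay) index computation; the two search functions
# are hypothetical placeholders, not part of the article.

def bremer_support(matrix, clade, best_score, best_score_without_clade):
    s_best = best_score(matrix)                          # length of the MPT(s)
    s_constrained = best_score_without_clade(matrix, clade)
    return s_constrained - s_best                        # extra steps needed to lose the clade
```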
Maximum parsimony is an epistemologically straightforward approach that makes few mechanistic assumptions, and is popular for this reason. However, it may not be statistically consistent under certain circumstances. Consistency, here meaning the monotonic convergence on the correct answer with the addition of more data, is a desirable property of statistical methods . As demonstrated in 1978 by Joe Felsenstein , [ 3 ] maximum parsimony can be inconsistent under certain conditions. The category of situations in which this is known to occur is called long branch attraction , and occurs, for example, where there are long branches (a high level of substitutions) for two taxa (A and C), but short branches for another two (B and D). A and B diverged from a common ancestor, as did C and D. Of course, to know that a method is giving you the wrong answer, you would need to know what the correct answer is. This is generally not the case in science. For this reason, some view statistical consistency as irrelevant to empirical phylogenetic questions. [ 19 ]
Assume for simplicity that we are considering a single binary character (it can either be + or -). Because the distance from B to D is small, in the vast majority of all cases, B and D will be the same. Here, we will assume that they are both + (+ and - are assigned arbitrarily and swapping them is only a matter of definition). If this is the case, there are four remaining possibilities. A and C can both be +, in which case all taxa are the same and all the trees have the same length. A can be + and C can be -, in which case only one character is different, and we cannot learn anything, as all trees have the same length. Similarly, A can be - and C can be +. The only remaining possibility is that A and C are both -. In this case, however, the evidence suggests that A and C group together, and B and D together. As a consequence, if the "true tree" is a tree of this type, the more data we collect (i.e. the more characters we study), the more the evidence will support the wrong tree. Of course, except in mathematical simulations, we never know what the "true tree" is. Thus, unless we are able to devise a model that is guaranteed to accurately recover the "true tree," any other optimality criterion or weighting scheme could also, in principle, be statistically inconsistent. The bottom line is, that while statistical inconsistency is an interesting theoretical issue, it is empirically a purely metaphysical concern, outside the realm of empirical testing. Any method could be inconsistent, and there is no way to know for certain whether it is, or not. It is for this reason that many systematists characterize their phylogenetic results as hypotheses of relationship.
Another complication with maximum parsimony, and other optimality-criterion based phylogenetic methods, is that finding the shortest tree is an NP-hard problem. [ 20 ] The only currently available, efficient way of obtaining a solution, given an arbitrarily large set of taxa, is by using heuristic methods which do not guarantee that the shortest tree will be recovered. These methods employ hill-climbing algorithms to progressively approach the best tree. However, it has been shown that there can be "tree islands" of suboptimal solutions, and the analysis can become trapped in these local optima . Thus, complex, flexible heuristics are required to ensure that tree space has been adequately explored. Several heuristics are available, including nearest neighbor interchange (NNI), tree bisection reconnection (TBR), and the parsimony ratchet .
It has been asserted that a major problem, especially for paleontology , is that maximum parsimony assumes that the only way two species can share the same nucleotide at the same position is if they are genetically related. [ citation needed ] This asserts that phylogenetic applications of parsimony assume that all similarity is homologous (other interpretations, such as the assertion that two organisms might not be related at all, are nonsensical). This is emphatically not the case: as with any form of character-based phylogeny estimation, parsimony is used to test the homologous nature of similarities by finding the phylogenetic tree which best accounts for all of the similarities.
It is often stated that parsimony is not relevant to phylogenetic inference because "evolution is not parsimonious." [ citation needed ] In most cases, there is no explicit alternative proposed; if no alternative is available, any statistical method is preferable to none at all. Additionally, it is not clear what would be meant if the statement "evolution is not parsimonious" were in fact true. This could be taken to mean that more character changes may have occurred historically than are predicted using the parsimony criterion. Because parsimony phylogeny estimation reconstructs the minimum number of changes necessary to explain a tree, this is quite possible. However, it has been shown through simulation studies, testing with known in vitro viral phylogenies, and congruence with other methods, that the accuracy of parsimony is in most cases not compromised by this. Parsimony analysis uses the number of character changes on trees to choose the best tree, but it does not require that exactly that many changes, and no more, produced the tree. As long as the changes that have not been accounted for are randomly distributed over the tree (a reasonable null expectation), the result should not be biased. In practice, the technique is robust: maximum parsimony exhibits minimal bias as a result of choosing the tree with the fewest changes.
An analogy can be drawn with choosing among contractors based on their initial (nonbinding) estimate of the cost of a job. The actual finished cost is very likely to be higher than the estimate. Despite this, choosing the contractor who furnished the lowest estimate should theoretically result in the lowest final project cost. This is because, in the absence of other data, we would assume that all of the relevant contractors have the same risk of cost overruns. In practice, of course, unscrupulous business practices may bias this result; in phylogenetics, too, some particular phylogenetic problems (for example, long branch attraction , described above) may potentially bias results. In both cases, however, there is no way to tell if the result is going to be biased, or the degree to which it will be biased, based on the estimate itself. With parsimony too, there is no way to tell that the data are positively misleading, without comparison to other evidence.
Parsimony is often characterized as implicitly adopting the position that evolutionary change is rare, or that homoplasy (convergence and reversal) is minimal in evolution. This is not entirely true: parsimony minimizes the number of convergences and reversals that are assumed by the preferred tree, but this may result in a relatively large number of such homoplastic events. It would be more appropriate to say that parsimony assumes only the minimum amount of change implied by the data. As above, this does not require that these were the only changes that occurred; it simply does not infer changes for which there is no evidence. The shorthand for describing this, to paraphrase Farris [ 5 ] is that "parsimony minimizes assumed homoplasies, it does not assume that homoplasy is minimal."
Recent simulation studies suggest that parsimony may be less accurate than trees built using Bayesian approaches for morphological data, [ 21 ] potentially due to overprecision, [ 22 ] although this has been disputed. [ 23 ] Studies using novel simulation methods have demonstrated that differences between inference methods result from the search strategy and consensus method employed, rather than the optimization used. [ 24 ] Also, analyses of 38 molecular and 86 morphological empirical datasets have shown that the common mechanism assumed by the evolutionary models used in model-based phylogenetics apply to most molecular, but few morphological datasets. [ 25 ] This finding validates the use of model-based phylogenetics for molecular data, but suggests that for morphological data, parsimony remains advantageous, at least until more sophisticated models become available for phenotypic data.
There are several other methods for inferring phylogenies based on discrete character data, including maximum likelihood and Bayesian inference . Each offers potential advantages and disadvantages. In practice, these methods tend to favor trees that are very similar to the most parsimonious tree(s) for the same dataset; [ 26 ] however, they allow for complex modelling of evolutionary processes, and as classes of methods are statistically consistent and are not susceptible to long-branch attraction . Note, however, that the performance of likelihood and Bayesian methods is dependent on the quality of the particular model of evolution employed; an incorrect model can produce a biased result, just like parsimony. In addition, they are still quite computationally slow relative to parsimony methods, sometimes requiring weeks to run large datasets. Most of these methods have particularly avid proponents and detractors; parsimony especially has been advocated as philosophically superior (most notably by ardent cladists ). [ citation needed ] One area where parsimony still holds much sway is in the analysis of morphological data, because, until recently, stochastic models of character change were not available for non-molecular data, and they are still not widely implemented. Parsimony has also recently been shown to be more likely to recover the true tree in the face of profound changes in evolutionary ("model") parameters (e.g., the rate of evolutionary change) within a tree. [ 27 ]
Distance matrices can also be used to generate phylogenetic trees. Non-parametric distance methods were originally applied to phenetic data using a matrix of pairwise distances and reconciled to produce a tree . The distance matrix can come from a number of different sources, including immunological distance , morphometric analysis, and genetic distances . For phylogenetic character data, raw distance values can be calculated by simply counting the number of pairwise differences in character states ( Manhattan distance ) or by applying a model of evolution. Notably, distance methods also allow use of data that may not be easily converted to character data, such as DNA-DNA hybridization assays. Today, distance-based methods are often frowned upon because phylogenetically-informative data can be lost when converting characters to distances. There are a number of distance-matrix methods and optimality criteria, of which the minimum evolution criterion is most closely related to maximum parsimony.
From among the distance methods , there exists a phylogenetic estimation criterion, known as Minimum Evolution (ME), that shares with maximum-parsimony the aspect of searching for the phylogeny that has the shortest total sum of branch lengths. [ 28 ] [ 29 ]
A subtle difference distinguishes the maximum-parsimony criterion from the ME criterion: while maximum-parsimony is based on an abductive heuristic, i.e., the plausibility of the simplest evolutionary hypothesis of taxa with respect to the more complex ones, the ME criterion is based on Kidd and Sgaramella-Zonta's conjectures (proven true 22 years later by Rzhetsky and Nei [ 30 ] ) stating that if the evolutionary distances from taxa were unbiased estimates of the true evolutionary distances then the true phylogeny of taxa would have a length shorter than any other alternative phylogeny compatible with those distances. Rzhetsky and Nei's results set the ME criterion free from the Occam's razor principle and confer it a solid theoretical and quantitative basis. [ 31 ] | https://en.wikipedia.org/wiki/Maximum_parsimony_(phylogenetics) |
The maximum power principle or Lotka's principle [ 1 ] has been proposed as the fourth principle of energetics in open system thermodynamics . According to American ecologist Howard T. Odum , "The maximum power principle can be stated: During self-organization, system designs develop and prevail that maximize power intake, energy transformation , and those uses that reinforce production and efficiency." [ 2 ]
Chen (2006) has located the origin of the statement of maximum power as a formal principle in a tentative proposal by Alfred J. Lotka (1922a, b). Lotka's statement sought to explain the Darwinian notion of evolution with reference to a physical principle. Lotka's work was subsequently developed by the systems ecologist Howard T. Odum in collaboration with the chemical engineer Richard C. Pinkerton, and later advanced by the engineer Myron Tribus .
While Lotka's work may have been a first attempt to formalise evolutionary thought in mathematical terms, it followed similar observations made by Leibniz , Volterra , and Ludwig Boltzmann , for example, throughout the sometimes controversial history of natural philosophy. In contemporary literature it is most commonly associated with the work of Howard T. Odum.
The significance of Odum's approach was given greater support during the 1970s, amid times of oil crisis , when, as Gilliland (1978, p. 100) observed, there was an emerging need for a new method of analysing the importance and value of energy resources to economic and environmental production. A field known as energy analysis , itself associated with net energy and EROEI , arose to fulfill this analytic need. However, in energy analysis intractable theoretical and practical difficulties arose when using the energy unit to understand: (a) the conversion among concentrated fuel types (or energy types ), (b) the contribution of labour, and (c) the contribution of the environment.
Lotka said (1922b: 151):
The principle of natural selection reveals itself as capable of yielding information which the first and second laws of thermodynamics are not competent to furnish. The two fundamental laws of thermodynamics are, of course, insufficient to determine the course of events in a physical system. They tell us that certain things cannot happen, but they do not tell us what does happen.
Gilliland noted that these difficulties in analysis in turn required some new theory to adequately explain the interactions and transactions of these different energies (different concentrations of fuels, labour and environmental forces). Gilliland (Gilliland 1978, p. 101) suggested that Odum's statement of the maximum power principle (H.T.Odum 1978, pp. 54–87) was, perhaps, an adequate expression of the requisite theory:
That theory, as it is expressed by the maximum power principle, addresses the empirical question of why systems of any type or size organize themselves into the patterns observed. Such a question assumes that physical laws govern system function. It does not assume, for example, that the system comprising economic production is driven by consumers; rather that the whole cycle of production-consumption is structured and driven by physical laws.
This theory Odum called maximum power theory. Gilliland observed that, in formulating maximum power theory, Odum had added another law (the maximum power principle) to the already well-established laws of thermodynamics. In 1978 Gilliland wrote that Odum's new law had not yet been validated (Gilliland 1978, p. 101). Gilliland also stated that in maximum power theory the second-law efficiency of thermodynamics required an additional physical concept: "the concept of second law efficiency under maximum power" (Gilliland 1978, p. 101):
Neither the first or second law of thermodynamics include a measure of the rate at which energy transformations or processes occur. The concept of maximum power incorporates time into measures of energy transformations. It provides information about the rate at which one kind of energy is transformed into another as well as the efficiency of that transformation.
In this way the concept of maximum power was being used as a principle to quantitatively describe the selective law of biological evolution . Perhaps H.T.Odum's most concise statement of this view was (1970, p. 62):
Lotka provided the theory of natural selection as a maximum power organizer; under competitive conditions systems are selected which use their energies in various structural-developing actions so as to maximize their use of available energies. By this theory systems of cycles which drain less energy lose out in comparative development. However Leopold and Langbein have shown that streams in developing erosion profiles, meander systems, and tributary networks disperse their potential energies more slowly than if their channels were more direct. These two statements might be harmonized by an optimum efficiency maximum power principle (Odum and Pinkerton 1955), which indicates that energies which are converted too rapidly into heat are not made available to the systems own use because they are not fed back through storages into useful pumping, but instead do random stirring of the environment.
The Odum–Pinkerton approach to Lotka's proposal was to apply Ohm's law – and the associated maximum power theorem (a result in electrical power systems) – to ecological systems. Odum and Pinkerton defined "power" in electronic terms as the rate of work , where work is understood as a " useful energy transformation". The concept of maximum power can therefore be defined as the maximum rate of useful energy transformation . Hence the underlying philosophy aims to unify the theories and associated laws of electronic and thermodynamic systems with biological systems. This approach presupposed an analogical view which sees the world as an ecological-electronic-economic engine.
It has been pointed out by Boltzmann that the fundamental object of contention in the life-struggle, in the evolution of the organic world, is available energy. In accord with this observation is the principle that, in the struggle for existence, the advantage must go to those organisms whose energy-capturing devices are most efficient in directing available energy into channels favorable to the preservation of the species.
Lotka underscored the centrality of available energy in the struggle for survival and evolution. By asserting that organisms with more efficient energy-capturing mechanisms gain an advantage, Lotka essentially aligned with the MPP. The MPP posits that biological systems, like any other complex systems, tend to evolve in ways that maximize their power intake or energy flux. In this context, organisms that effectively capture and utilize energy resources are more likely to thrive and propagate, driving evolutionary processes. Lotka's observation provides an early theoretical foundation for understanding the role of energy dynamics in biological evolution.
...it seems to this author appropriate to unite the biological and physical traditions by giving the Darwinian principle of natural selection the citation as the fourth law of thermodynamics , since it is the controlling principle in rate of heat generation and efficiency settings in irreversible biological processes.
Odum's proposition of linking Darwinian natural selection with the fourth law of thermodynamics is significant. It suggests that the principles governing biological evolution are intimately connected with those of thermodynamics, particularly the concepts of energy flow and entropy production. By considering natural selection as a thermodynamic process, Odum implies that organisms evolve traits and behaviors that enhance their energy capture and utilization, consistent with the MPP. Furthermore, by emphasizing the regulation of heat generation and efficiency in biological processes, Odum highlights the importance of optimizing energy utilization for survival and reproduction, echoing the principles of the MPP.
...it may be time to recognize the maximum power principle as the fourth thermodynamic law as suggested by Lotka.
Odum's advocacy for recognizing the MPP as the fourth thermodynamic law represents a culmination of earlier insights into the relationship between thermodynamics and biology. By elevating the MPP to the status of a fundamental thermodynamic law, Odum underscores its universal applicability across various complex systems, including biological ones. This perspective emphasizes that biological systems, driven by the imperatives of survival and reproduction, tend to evolve in ways that maximize their power output, thereby enhancing their capacity to exploit available energy resources. By acknowledging the MPP as a governing principle, Odum highlights its explanatory power in understanding ecosystem dynamics, population interactions, and evolutionary trajectories.
The maximum power principle can be stated: During self-organization, system designs develop and prevail that maximize power intake, energy transformation, and those uses that reinforce production and efficiency. (H.T. Odum 1995, p. 311)
...the maximum power principle ... states that systems which maximize their flow of energy survive in competition. In other words, rather than merely accepting the fact that more energy per unit of time is transformed in a process which operates at maximum power, this principle says that systems organize and structure themselves naturally to maximize power. Systems regulate themselves according to the maximum power principle. Over time, the systems which maximize power are selected for whereas those that do not are selected against and eventually eliminated. ... Odum argues ... that the free market mechanisms of the economy effectively do the same thing for human systems and that our economic evolution to date is a product of that selection process. (Gilliland 1978, pp. 101–102)
Odum et al. viewed the maximum power theorem as a principle of power-efficiency reciprocity selection with wider application than just electronics. For example, Odum saw it in open systems operating on solar energy, like both photovoltaics and photosynthesis (1963, p. 438). Like the maximum power theorem, Odum's statement of the maximum power principle relies on the notion of 'matching', such that high-quality energy maximizes power by matching and amplifying energy (1994, pp. 262, 541): "in surviving designs a matching of high-quality energy with larger amounts of low-quality energy is likely to occur" (1994, p. 260). As with electronic circuits, the resultant rate of energy transformation will be at a maximum at an intermediate power efficiency. In 2006, T.T. Cai, C.L. Montague and J.S. Davis said that, "The maximum power principle is a potential guide to understanding the patterns and processes of ecosystem development and sustainability. The principle predicts the selective persistence of ecosystem designs that capture a previously untapped energy source." (2006, p. 317). In several texts H.T. Odum gave the Atwood machine as a practical example of the 'principle' of maximum power.
The mathematical definition given by H.T. Odum is formally analogous to the definition provided on the maximum power theorem article. (For a brief explanation of Odum's approach to the relationship between ecology and electronics see Ecological Analog of Ohm's Law )
Whether or not the principle of maximum power efficiency can be considered the fourth law of thermodynamics and the fourth principle of energetics is moot. Nevertheless, H.T. Odum also proposed a corollary of maximum power as the organisational principle of evolution, describing the evolution of microbiological systems, economic systems , planetary systems, and astrophysical systems. He called this corollary the maximum empower principle. This was suggested because, as S.E. Jorgensen, M.T. Brown, H.T. Odum (2004) note,
Maximum power might be misunderstood to mean giving priority to low level processes. ... However, the higher level transformation processes are just as important as the low level processes. ... Therefore, Lotka's principle is clarified by stating it as the principle of self organization for maximum empower .
C. Giannantoni may have confused matters when he wrote "The "Maximum Em-Power Principle" (Lotka–Odum) is generally considered the "Fourth Thermodynamic Principle" (mainly) because of its practical validity for a very wide class of physical and biological systems" (C. Giannantoni 2002, § 13, p. 155). Nevertheless, Giannantoni has proposed the Maximum Em-Power Principle as the fourth principle of thermodynamics (Giannantoni 2006).
The preceding discussion is incomplete. The "maximum power" result was discovered several times independently in physics and engineering; see Novikov (1957), El-Wakil (1962), and Curzon and Ahlborn (1975). The incorrectness of this analysis, and of the conclusions drawn from it about design evolution, was demonstrated by Gyftopoulos (2002). | https://en.wikipedia.org/wiki/Maximum_power_principle |
In electrical engineering , the maximum power transfer theorem states that, to obtain maximum external power from a power source with internal resistance , the resistance of the load must equal the resistance of the source as viewed from its output terminals. Moritz von Jacobi published the maximum power (transfer) theorem around 1840; it is also referred to as " Jacobi's law ". [ 1 ]
The theorem results in maximum power transfer from the power source to the load, but not maximum efficiency of useful power out of total power consumed. If the load resistance is made larger than the source resistance, then efficiency increases (since a higher percentage of the source power is transferred to the load), but the magnitude of the load power decreases (since the total circuit resistance increases). [ 2 ] If the load resistance is made smaller than the source resistance, then efficiency decreases (since most of the power ends up being dissipated in the source). Although the total power dissipated increases (due to a lower total resistance), the amount dissipated in the load decreases.
The theorem states how to choose (so as to maximize power transfer) the load resistance, once the source resistance is given. It is a common misconception to apply the theorem in the opposite scenario. It does not say how to choose the source resistance for a given load resistance. In fact, the source resistance that maximizes power transfer from a voltage source is always zero (the hypothetical ideal voltage source ), regardless of the value of the load resistance.
The theorem can be extended to alternating current circuits that include reactance , and states that maximum power transfer occurs when the load impedance is equal to the complex conjugate of the source impedance.
The mathematics of the theorem also applies to other physical interactions, such as: [ 2 ] [ 3 ]
The theorem was originally misunderstood (notably by Joule [ 4 ] ) to imply that a system consisting of an electric motor driven by a battery could not be more than 50% efficient , since the power dissipated as heat in the battery would always be equal to the power delivered to the motor when the impedances were matched.
In 1880 this assumption was shown to be false by either Edison or his colleague Francis Robbins Upton , who realized that maximum efficiency was not the same as maximum power transfer.
To achieve maximum efficiency, the resistance of the source (whether a battery or a dynamo ) could be (or should be) made as close to zero as possible. Using this new understanding, they obtained an efficiency of about 90%, and proved that the electric motor was a practical alternative to the heat engine .
The efficiency η is the ratio of the power dissipated by the load resistance R L to the total power dissipated by the circuit (which includes the voltage source's resistance of R S as well as R L ):
{\displaystyle \eta ={\frac {P_{\mathrm {L} }}{P_{\mathrm {Total} }}}={\frac {I^{2}\cdot R_{\mathrm {L} }}{I^{2}\cdot (R_{\mathrm {L} }+R_{\mathrm {S} })}}={\frac {R_{\mathrm {L} }}{R_{\mathrm {L} }+R_{\mathrm {S} }}}={\frac {1}{1+R_{\mathrm {S} }/R_{\mathrm {L} }}}\,.}
Consider three particular cases (note that voltage sources must have some resistance):
A related concept is reflectionless impedance matching .
In radio frequency transmission lines , and other electronics , there is often a requirement to match the source impedance (at the transmitter) to the load impedance (such as an antenna ) to avoid reflections in the transmission line .
In the simplified model of powering a load with resistance R L by a source with voltage V and source resistance R S , Ohm's law gives the resulting current I simply as the source voltage divided by the total circuit resistance: {\displaystyle I={\frac {V}{R_{\mathrm {S} }+R_{\mathrm {L} }}}.}
The power P L dissipated in the load is the square of the current multiplied by the resistance: {\displaystyle P_{\mathrm {L} }=I^{2}R_{\mathrm {L} }=\left({\frac {V}{R_{\mathrm {S} }+R_{\mathrm {L} }}}\right)^{2}R_{\mathrm {L} }={\frac {V^{2}}{R_{\mathrm {S} }^{2}/R_{\mathrm {L} }+2R_{\mathrm {S} }+R_{\mathrm {L} }}}.}
The value of R L for which this expression is a maximum could be calculated by differentiating it, but it is easier to calculate the value of R L for which the denominator {\displaystyle R_{\mathrm {S} }^{2}/R_{\mathrm {L} }+2R_{\mathrm {S} }+R_{\mathrm {L} }} is a minimum. The result will be the same in either case. Differentiating the denominator with respect to R L : {\displaystyle {\frac {d}{dR_{\mathrm {L} }}}\left(R_{\mathrm {S} }^{2}/R_{\mathrm {L} }+2R_{\mathrm {S} }+R_{\mathrm {L} }\right)=-R_{\mathrm {S} }^{2}/R_{\mathrm {L} }^{2}+1.}
For a maximum or minimum, the first derivative is zero, so {\displaystyle R_{\mathrm {S} }^{2}/R_{\mathrm {L} }^{2}=1} or {\displaystyle R_{\mathrm {L} }=\pm R_{\mathrm {S} }.}
In practical resistive circuits, R S and R L are both positive, so the positive sign in the above is the correct solution.
To find out whether this solution is a minimum or a maximum, the denominator expression is differentiated again:
{\displaystyle {\frac {d^{2}}{dR_{\mathrm {L} }^{2}}}\left({R_{\mathrm {S} }^{2}/R_{\mathrm {L} }+2R_{\mathrm {S} }+R_{\mathrm {L} }}\right)={2R_{\mathrm {S} }^{2}}/{R_{\mathrm {L} }^{3}}.}
This is always positive for positive values of R S and R L , showing that the denominator is a minimum, and the power is therefore a maximum, when: {\displaystyle R_{\mathrm {S} }=R_{\mathrm {L} }.}
The above proof assumes fixed source resistance R S . When the source resistance can be varied, power transferred to the load can be increased by reducing R S . For example, a 100 volt source with an R S of 10 Ω will deliver 250 watts of power to a 10 Ω load; reducing R S to 0 Ω increases the power delivered to 1000 watts.
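The fixed-source result can be checked numerically. The Python sketch below (illustrative only) uses the 100 volt, 10 Ω example above: a crude grid search over load resistances finds the peak at R L = R S , where the efficiency is 50%.

```python
# Illustrative numeric check, using the 100 V, 10 ohm example above: load power
# peaks at R_L = R_S, where exactly half of the total power reaches the load.

V, R_S = 100.0, 10.0

def load_power(R_L):
    I = V / (R_S + R_L)            # Ohm's law for the series circuit
    return I * I * R_L             # power dissipated in the load

def efficiency(R_L):
    return R_L / (R_S + R_L)       # fraction of total power reaching the load

best_power, best_R_L = max((load_power(r / 10.0), r / 10.0) for r in range(1, 1001))
print(best_power, best_R_L)        # ~250.0 W at R_L = 10.0 ohm
print(efficiency(best_R_L))        # 0.5 at the matched load
print(load_power(1000.0))          # a much larger load: higher efficiency, far less power
```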
Note that this shows that maximum power transfer can also be interpreted as the load voltage being equal to one-half of the Thevenin voltage equivalent of the source. [ 5 ]
The power transfer theorem also applies when the source and/or load are not purely resistive.
A refinement of the maximum power theorem says that any reactive components of source and load should be of equal magnitude but opposite sign. ( See below for a derivation. )
Physically realizable sources and loads are not usually purely resistive, having some inductive or capacitive components, and so practical applications of this theorem, under the name of complex conjugate impedance matching, do, in fact, exist.
If the source is totally inductive (capacitive), then a totally capacitive (inductive) load, in the absence of resistive losses, would receive 100% of the energy from the source but send it back after a quarter cycle.
The resultant circuit is nothing other than a resonant LC circuit in which the energy continues to oscillate to and fro. This oscillation is called reactive power .
Power factor correction (where an inductive reactance is used to "balance out" a capacitive one), is essentially the same idea as complex conjugate impedance matching although it is done for entirely different reasons.
For a fixed reactive source , the maximum power theorem maximizes the real power (P) delivered to the load by complex conjugate matching the load to the source.
For a fixed reactive load , power factor correction minimizes the apparent power (S) (and unnecessary current) conducted by the transmission lines, while maintaining the same amount of real power transfer.
This is done by adding a reactance to the load to balance out the load's own reactance, changing the reactive load impedance into a resistive load impedance.
In this circuit, AC power is being transferred from the source, with phasor magnitude of voltage | V S | (positive peak voltage) and fixed source impedance Z S (S for source), to a load with impedance Z L (L for load), resulting in a (positive) magnitude | I | of the current phasor I . This magnitude results from dividing the magnitude of the source voltage by the magnitude of the total circuit impedance: {\displaystyle |I|={|V_{\text{S}}| \over |Z_{\text{S}}+Z_{\text{L}}|}.}
The average power P L dissipated in the load is the square of the current multiplied by the resistive portion (the real part) R L of the load impedance Z L : {\displaystyle {\begin{aligned}P_{\text{L}}&=I_{\text{rms}}^{2}R_{\text{L}}={1 \over 2}|I|^{2}R_{\text{L}}\\&={1 \over 2}\left({|V_{\text{S}}| \over |Z_{\text{S}}+Z_{\text{L}}|}\right)^{2}R_{\text{L}}={1 \over 2}{|V_{\text{S}}|^{2}R_{\text{L}} \over (R_{\text{S}}+R_{\text{L}})^{2}+(X_{\text{S}}+X_{\text{L}})^{2}},\end{aligned}}} where R S and R L denote the resistances, that is the real parts, and X S and X L denote the reactances, that is the imaginary parts, of respectively the source and load impedances Z S and Z L .
To determine, for a given source voltage V S and source impedance Z S , the value of the load impedance Z L for which this expression for the power yields a maximum, one first finds, for each fixed positive value of R L , the value of the reactive term X L for which the denominator {\displaystyle (R_{\text{S}}+R_{\text{L}})^{2}+(X_{\text{S}}+X_{\text{L}})^{2}} is a minimum. Since reactances can be negative, this is achieved by adapting the load reactance to: {\displaystyle X_{\text{L}}=-X_{\text{S}}.}
This reduces the above equation to: {\displaystyle P_{\text{L}}={\frac {1}{2}}{\frac {|V_{\text{S}}|^{2}R_{\text{L}}}{(R_{\text{S}}+R_{\text{L}})^{2}}}} and it remains to find the value of R L which maximizes this expression. This problem has the same form as in the purely resistive case, and the maximizing condition therefore is {\displaystyle R_{\text{L}}=R_{\text{S}}.}
The two maximizing conditions, {\displaystyle X_{\text{L}}=-X_{\text{S}}} and {\displaystyle R_{\text{L}}=R_{\text{S}}} , describe the complex conjugate of the source impedance, denoted by {\displaystyle ^{*}} , and thus can be concisely combined to: {\displaystyle Z_{\text{L}}=Z_{\text{S}}^{*}.} | https://en.wikipedia.org/wiki/Maximum_power_transfer_theorem |
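A small numeric illustration of conjugate matching (the particular source impedance is an arbitrary assumption): among a handful of candidate load impedances, the average load power is largest at the complex conjugate of the source impedance.

```python
# Illustrative sketch with an assumed source: V = 100 V (peak), Z_S = 10 + 5j ohm.
# The average load power is maximized at Z_L = conj(Z_S) = 10 - 5j ohm.

V = 100.0
Z_S = complex(10.0, 5.0)

def avg_load_power(Z_L):
    I = V / abs(Z_S + Z_L)          # magnitude of the current phasor
    return 0.5 * I * I * Z_L.real   # average power goes into the resistive part

candidates = [complex(r, x) for r in (5.0, 10.0, 20.0) for x in (-5.0, 0.0, 5.0)]
best = max(candidates, key=avg_load_power)
print(best, avg_load_power(best))          # (10-5j) 125.0
print(avg_load_power(Z_S.conjugate()))     # 125.0, the theoretical maximum |V|^2 / (8 R_S)
```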
In the mathematical fields of differential equations and geometric analysis , the maximum principle is one of the most useful and best known tools of study. Solutions of a differential inequality in a domain D satisfy the maximum principle if they achieve their maxima at the boundary of D .
The maximum principle enables one to obtain information about solutions of differential equations without any explicit knowledge of the solutions themselves. In particular, the maximum principle is a useful tool in the numerical approximation of solutions of ordinary and partial differential equations and in the determination of bounds for the errors in such approximations. [ 1 ]
In a simple two-dimensional case, consider a function of two variables u ( x , y ) such that {\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0.}
The weak maximum principle , in this setting, says that for any open precompact subset M of the domain of u , the maximum of u on the closure of M is achieved on the boundary of M . The strong maximum principle says that, unless u is a constant function, the maximum cannot also be achieved anywhere on M itself.
Such statements give a striking qualitative picture of solutions of the given differential equation. Such a qualitative picture can be extended to many kinds of differential equations. In many situations, one can also use such maximum principles to draw precise quantitative conclusions about solutions of differential equations, such as control over the size of their gradient . There is no single or most general maximum principle which applies to all situations at once.
In the field of convex optimization , there is an analogous statement which asserts that the maximum of a convex function on a compact convex set is attained on the boundary . [ 2 ]
Here we consider the simplest case, although the same thinking can be extended to more general scenarios. Let M be an open subset of Euclidean space and let u be a C 2 function on M such that {\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}=0,}
where for each i and j between 1 and n , a ij is a function on M with a ij = a ji .
Fix some choice of x in M . According to the spectral theorem of linear algebra, all eigenvalues of the matrix [ a ij ( x )] are real, and there is an orthonormal basis of ℝ n consisting of eigenvectors. Denote the eigenvalues by λ i and the corresponding eigenvectors by v i , for i from 1 to n . Then the differential equation, at the point x , can be rephrased as {\displaystyle \sum _{i=1}^{n}\lambda _{i}{\frac {\partial ^{2}u}{\partial v_{i}^{2}}}(x)=0,} where {\displaystyle {\tfrac {\partial ^{2}u}{\partial v_{i}^{2}}}(x)} denotes the second derivative of u at x in the direction v i .
The essence of the maximum principle is the simple observation that if each eigenvalue is positive (which amounts to a certain formulation of "ellipticity" of the differential equation) then the above equation imposes a certain balancing of the directional second derivatives of the solution. In particular, if one of the directional second derivatives is negative, then another must be positive. At a hypothetical point where u is maximized, all directional second derivatives are automatically nonpositive, and the "balancing" represented by the above equation then requires all directional second derivatives to be identically zero.
This elementary reasoning could be argued to represent an infinitesimal formulation of the strong maximum principle, which states, under some extra assumptions (such as the continuity of a ), that u must be constant if there is a point of M where u is maximized.
Note that the above reasoning is unaffected if one considers the more general partial differential equation {\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x_{i}}}=0,}
since the added term is automatically zero at any hypothetical maximum point. The reasoning is also unaffected if one considers the more general condition {\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x_{i}}}\geq 0,}
in which one can even note the extra phenomena of having an outright contradiction if there is a strict inequality ( > rather than ≥ ) in this condition at the hypothetical maximum point. This phenomenon is important in the formal proof of the classical weak maximum principle.
However, the above reasoning no longer applies if one considers the condition {\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x_{i}}}\leq 0,}
since now the "balancing" condition, as evaluated at a hypothetical maximum point of u , only says that a weighted average of manifestly nonpositive quantities is nonpositive. This is trivially true, and so one cannot draw any nontrivial conclusion from it. This is reflected by any number of concrete examples, such as the fact that
and on any open region containing the origin, the function − x 2 − y 2 certainly has a maximum.
Let M denote an open subset of Euclidean space. If a smooth function u : M → R {\displaystyle u:M\to \mathbb {R} } is maximized at a point p , then one automatically has: d u ( p ) = 0 (the first-derivative test), and a negative semi-definite Hessian of u at p ; in particular, every directional second derivative of u at p is nonpositive, so that Δ u ( p ) ≤ 0 (the second-derivative test).
One can view a partial differential equation as the imposition of an algebraic relation between the various derivatives of a function. So, if u is the solution of a partial differential equation, then it is possible that the above conditions on the first and second derivatives of u form a contradiction to this algebraic relation. This is the essence of the maximum principle. Clearly, the applicability of this idea depends strongly on the particular partial differential equation in question.
For instance, if u solves the differential equation {\displaystyle \Delta u=|du|^{2}+2,}
then it is clearly impossible to have Δ u ≤ 0 {\displaystyle \Delta u\leq 0} and d u = 0 {\displaystyle du=0} at any point of the domain. So, following the above observation, it is impossible for u to take on a maximum value. If, instead u solved the differential equation Δ u = | d u | 2 {\displaystyle \Delta u=|du|^{2}} then one would not have such a contradiction, and the analysis given so far does not imply anything interesting. If u solved the differential equation Δ u = | d u | 2 − 2 , {\displaystyle \Delta u=|du|^{2}-2,} then the same analysis would show that u cannot take on a minimum value.
The possibility of such analysis is not even limited to partial differential equations. For instance, if u : M → R {\displaystyle u:M\to \mathbb {R} } is a function such that
which is a sort of "non-local" differential equation, then the automatic strict positivity of the right-hand side shows, by the same analysis as above, that u cannot attain a maximum value.
There are many methods to extend the applicability of this kind of analysis in various ways. For instance, if u is a harmonic function, then the above sort of contradiction does not directly occur, since the existence of a point p where Δ u ( p ) ≤ 0 {\displaystyle \Delta u(p)\leq 0} is not in contradiction to the requirement Δ u = 0 {\displaystyle \Delta u=0} everywhere. However, one could consider, for an arbitrary real number s , the function u s defined by {\displaystyle u_{s}(x)=u(x)+se^{x_{1}}.}
It is straightforward to see that {\displaystyle \Delta u_{s}=\Delta u+se^{x_{1}}=se^{x_{1}}.}
By the above analysis, if s > 0 {\displaystyle s>0} then u s cannot attain a maximum value. One might wish to consider the limit as s tends to 0 in order to conclude that u also cannot attain a maximum value. However, it is possible for the pointwise limit of a sequence of functions without maxima to have a maximum. Nonetheless, if M has a boundary such that M together with its boundary is compact, then supposing that u can be continuously extended to the boundary, it follows immediately that both u and u s attain a maximum value on M ∪ ∂ M . {\displaystyle M\cup \partial M.} Since we have shown that u s , as a function on M , does not have a maximum, it follows that the maximum point of u s , for any s > 0 , is on ∂ M . {\displaystyle \partial M.} By the sequential compactness of ∂ M , {\displaystyle \partial M,} it follows that the maximum of u is attained on ∂ M . {\displaystyle \partial M.} This is the weak maximum principle for harmonic functions. This does not, by itself, rule out the possibility that the maximum of u is also attained somewhere on M . That is the content of the "strong maximum principle," which requires further analysis.
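The weak maximum principle has a discrete counterpart that is easy to verify numerically. In the illustrative Python sketch below (not from the article), a grid function is relaxed toward the discrete Laplace equation, in which each interior value is the average of its four neighbours, with arbitrary boundary data; the interior maximum never exceeds the boundary maximum.

```python
# Illustrative discrete analogue: relax a grid function toward the discrete
# Laplace equation (each interior value equals the average of its 4 neighbours)
# with arbitrary boundary data, then check where the maximum sits.

import random

n = 20
random.seed(0)
u = [[0.0] * n for _ in range(n)]
for k in range(n):                        # arbitrary boundary values
    u[0][k], u[n - 1][k] = random.uniform(-1, 1), random.uniform(-1, 1)
    u[k][0], u[k][n - 1] = random.uniform(-1, 1), random.uniform(-1, 1)

for _ in range(2000):                     # Jacobi iteration toward the harmonic solution
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] + u[i][j - 1] + u[i][j + 1])
    u = new

interior_max = max(u[i][j] for i in range(1, n - 1) for j in range(1, n - 1))
boundary_max = max(max(u[0]), max(u[n - 1]), max(r[0] for r in u), max(r[-1] for r in u))
print(interior_max < boundary_max)        # True: the maximum is attained on the boundary
```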
The use of the specific function e x 1 {\displaystyle e^{x_{1}}} above was very inessential. All that mattered was to have a function which extends continuously to the boundary and whose Laplacian is strictly positive. So we could have used, for instance, {\displaystyle x\mapsto |x|^{2}=x_{1}^{2}+\cdots +x_{n}^{2},} whose Laplacian is the constant 2 n ,
with the same effect.
Let M be an open subset of Euclidean space. Let u : M → R {\displaystyle u:M\to \mathbb {R} } be a twice-differentiable function which attains its maximum value C . Suppose that {\displaystyle Lu:=\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x_{i}}}\geq 0\qquad {\text{on }}M.}
Suppose that one can find (or prove the existence of): an open subset Ω of M and a point x 0 on the boundary of Ω with u ( x 0 ) = C , together with a function h on the closure of Ω such that L h ≥ 0 on Ω , h ( x 0 ) = 0 , and u + h − C ≤ 0 on the boundary of Ω .
Then L ( u + h − C ) ≥ 0 on Ω with u + h − C ≤ 0 on the boundary of Ω ; according to the weak maximum principle, one has u + h − C ≤ 0 on Ω . This can be reorganized to say {\displaystyle u(x_{0})-u(x)\geq h(x)-h(x_{0})}
for all x in Ω . If one can make the choice of h so that the right-hand side has a manifestly positive nature, then this will provide a contradiction to the fact that x 0 is a maximum point of u on M , so that its gradient must vanish.
The above "program" can be carried out. Choose Ω to be a spherical annulus; one selects its center x c to be a point closer to the closed set u −1 ( C ) than to the closed set ∂ M , and the outer radius R is selected to be the distance from this center to u −1 ( C ) ; let x 0 be a point on this latter set which realizes the distance. The inner radius ρ is arbitrary. Define
Now the boundary of Ω consists of two spheres; on the outer sphere, one has h = 0 ; due to the selection of R , one has u ≤ C on this sphere, and so u + h − C ≤ 0 holds on this part of the boundary, together with the requirement h ( x 0 ) = 0 . On the inner sphere, one has u < C . Due to the continuity of u and the compactness of the inner sphere, one can select δ > 0 such that u + δ < C . Since h is constant on this inner sphere, one can select ε > 0 such that u + h ≤ C on the inner sphere, and hence on the entire boundary of Ω .
Direct calculation shows {\displaystyle Lh(x)=\varepsilon e^{-\alpha |x-x_{\mathrm {c} }|^{2}}\left(4\alpha ^{2}\sum _{i,j}a_{ij}(x)(x_{i}-x_{\mathrm {c} ,i})(x_{j}-x_{\mathrm {c} ,j})-2\alpha \sum _{i}a_{ii}(x)-2\alpha \sum _{i}b_{i}(x)(x_{i}-x_{\mathrm {c} ,i})\right).}
There are various conditions under which the right-hand side can be guaranteed to be nonnegative; see the statement of the theorem below.
Lastly, note that the directional derivative of h at x 0 along the inward-pointing radial line of the annulus is strictly positive. As described in the above summary, this will ensure that a directional derivative of u at x 0 is nonzero, in contradiction to x 0 being a maximum point of u on the open set M .
The following is the statement of the theorem in the books of Morrey and Smoller, following the original statement of Hopf (1927):
Let M be an open subset of Euclidean space ℝ n . For each i and j between 1 and n , let a ij and b i be continuous functions on M with a ij = a ji . Suppose that for all x in M , the symmetric matrix [ a ij ] is positive-definite. If u is a nonconstant C 2 function on M such that {\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x_{i}}}\geq 0}
on M , then u does not attain a maximum value on M .
The point of the continuity assumption is that continuous functions are bounded on compact sets, the relevant compact set here being the spherical annulus appearing in the proof. Furthermore, by the same principle, there is a number λ such that for all x in the annulus, the matrix [ a ij ( x )] has all eigenvalues greater than or equal to λ . One then takes α , as appearing in the proof, to be large relative to these bounds. Evans's book has a slightly weaker formulation, in which there is assumed to be a positive number λ which is a lower bound of the eigenvalues of [ a ij ] for all x in M .
These continuity assumptions are clearly not the most general possible in order for the proof to work. For instance, the following is Gilbarg and Trudinger's statement of the theorem, following the same proof:
Let M be an open subset of Euclidean space ℝ n . For each i and j between 1 and n , let a ij and b i be functions on M with a ij = a ji . Suppose that for all x in M , the symmetric matrix [ a ij ] is positive-definite, and let λ(x) denote its smallest eigenvalue. Suppose that {\displaystyle \textstyle {\frac {a_{ii}}{\lambda }}} and {\displaystyle \textstyle {\frac {|b_{i}|}{\lambda }}} are bounded functions on M for each i between 1 and n . If u is a nonconstant C 2 function on M such that {\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x_{i}}}\geq 0}
on M , then u does not attain a maximum value on M .
One cannot naively extend these statements to the general second-order linear elliptic equation, as already seen in the one-dimensional case. For instance, the ordinary differential equation y ″ + 2 y = 0 has sinusoidal solutions, which certainly have interior maxima. This extends to the higher-dimensional case, where one often has solutions to "eigenfunction" equations Δ u + cu = 0 which have interior maxima. The sign of c is relevant, as also seen in the one-dimensional case; for instance the solutions to y ″ - 2 y = 0 are exponentials, and the character of the maxima of such functions is quite different from that of sinusoidal functions. | https://en.wikipedia.org/wiki/Maximum_principle |
The maximum residue limit (also maximum residue level , MRL ) is the maximum amount of pesticide residue that is expected to remain on food products when a pesticide is used according to label directions, that will not be a concern to human health. [ 1 ] [ 2 ]
The MRL is usually determined by repeated (on the order of 10) field trials, where the crop has been treated according to good agricultural practice (GAP) and an appropriate pre harvest interval or withholding period has elapsed. For many pesticides this is set at the limit of determination (LOD) – since only major pesticides have been evaluated and understanding of acceptable daily intake (ADI) is incomplete (i.e. producers or public bodies have not submitted MRL data – often because these were not required in the past). LOD can be considered a measure of presence/absence, but certain residues may not be quantifiable at very low levels. For this reason the limit of quantification (LOQ) is often used instead of the LOD. As a rule of thumb the LOQ is approximately two times the LOD. For substances that are not included in any of the annexes in EU regulations, a default MRL of 0.01 mg/kg normally applies. [ citation needed ]
It follows that adoption of GAP at the farm level must be a priority, and includes the withdrawal of obsolete pesticides. With increasingly sensitive detection equipment, a certain amount of pesticide residue will often be measured following field use. In the current regulatory environment, it would be wise for cocoa producers to focus only on pest control agents that are permitted for use in the EU and US. It should be stressed that MRLs are set on the basis of observations and not on ADIs. [ citation needed ]
If the MRL of a medicinal plant is not known, it is calculated by the formula: [ 3 ]
where SF is the safety factor.
In some cases in the EU, MRLs are also used for ornamental produce, and checked against MRLs for food crops. While this is a sound approach for the general environmental impact, it doesn't reflect the potential exposure of people handling ornamentals. A swap test can eliminate this gap. MRLs for ornamental produce can sometimes result in a conflicting outcome because of the absence of pre-harvest intervals (PHI) or withholding periods for ornamentals, specifically in crops where harvesting is continuous, like roses. This happens when a grower is following the label recommendations and the produce is sampled shortly after. [ citation needed ]
Three key points are taken into consideration regarding MRL values in the EU regulation: [ 4 ] 1) the amounts of residues found in food must be safe for consumers and must be as low as possible; 2) the European Commission fixes MRLs for all food and animal feed; and 3) the MRLs for all crops and all pesticides can be found in the MRL database on the Commission website.
| https://en.wikipedia.org/wiki/Maximum_residue_limit |
The maximum safe storage temperature ( MSST ) is the highest temperature at which a chemical (such as an organic peroxide ) can be stored; above this temperature, slow decomposition and possibly explosion may occur.
| https://en.wikipedia.org/wiki/Maximum_safe_storage_temperature |
The maximum-term method is a consequence of the large numbers encountered in statistical mechanics . It states that under appropriate conditions the logarithm of a summation is essentially equal to the logarithm of the maximum term in the summation.
These conditions are (see also the proof below) that (1) the number of terms in the sum is large and (2) the terms themselves scale exponentially with this number. A typical application is the calculation of a thermodynamic potential from a partition function . These functions often contain terms with factorials $n!$, which scale as $n^{1/2}n^{n}/e^{n}$ ( Stirling's approximation ).
Consider the sum
$S=\sum _{N=1}^{M}T_{N}$
where $T_{N}>0$ for all N . Since all the terms are positive, the value of S must be greater than the value of the largest term, $T_{\max }$ , and less than the product of the number of terms and the value of the largest term. So we have
$T_{\max }\leq S\leq MT_{\max }.$
Taking the logarithm gives
$\ln T_{\max }\leq \ln S\leq \ln T_{\max }+\ln M.$
As frequently happens in statistical mechanics, we assume that $T_{\max }$ will be $O(\ln M!)=O(e^{M})$ : see Big O notation .
Here we have
$\ln T_{\max }\leq \ln S\leq \ln T_{\max }+\ln M.$
For large M , $\ln M$ is negligible with respect to M itself, and so $\ln M/O(e^{M})\in o(1)$ . Then, we can see that ln S is bounded from above and below by $\ln T_{\max }$ up to terms that are negligible in this limit, and so
$\ln S\approx \ln T_{\max }.$
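As a numerical illustration, the following is a minimal sketch in plain Python (the choice of binomial-coefficient terms is just one convenient family that scales exponentially with M, not something taken from the article). It compares ln S with ln T_max for the sum S = Σ_N C(M, N) = 2^M, whose largest term is the central binomial coefficient; the ratio of the two logarithms approaches 1 as M grows, which is the content of the maximum-term method.

```python
from math import exp, lgamma, log

def log_binom(m, n):
    # log of the binomial coefficient C(m, n), computed via the log-gamma function
    return lgamma(m + 1) - lgamma(n + 1) - lgamma(m - n + 1)

for M in (10, 100, 1000, 10000):
    log_terms = [log_binom(M, N) for N in range(M + 1)]
    log_T_max = max(log_terms)
    # numerically stable ln S, where S is the sum of the (exponentially large) terms
    log_S = log_T_max + log(sum(exp(t - log_T_max) for t in log_terms))
    print(M, round(log_S, 3), round(log_T_max, 3), round(log_T_max / log_S, 6))
```

For this family S = 2^M exactly, so ln S = M ln 2, while ln T_max ≈ M ln 2 − ½ ln(πM/2); the logarithmic correction becomes negligible as M grows, exactly as the bounds above predict.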
| https://en.wikipedia.org/wiki/Maximum_term_method |
The maximum theorem provides conditions for the continuity of an optimized function and the set of its maximizers with respect to its parameters. The statement was first proven by Claude Berge in 1959. [ 1 ] The theorem is primarily used in mathematical economics and optimal control .
Maximum Theorem . [ 2 ] [ 3 ] [ 4 ] [ 5 ] Let X and Θ be topological spaces, f : X × Θ → ℝ be a continuous function on the product X × Θ, and C : Θ ⇉ X be a compact-valued correspondence such that C(θ) ≠ ∅ for all θ ∈ Θ. Define the marginal function (or value function ) f* : Θ → ℝ by
$f^{*}(\theta )=\sup\{f(x,\theta ):x\in C(\theta )\}$
and the set of maximizers C* : Θ ⇉ X by
$C^{*}(\theta )=\{x\in C(\theta ):f(x,\theta )=f^{*}(\theta )\}.$
If C is continuous (i.e. both upper and lower hemicontinuous ) at θ, then the value function f* is continuous, and the set of maximizers C* is upper-hemicontinuous with nonempty and compact values. As a consequence, the sup may be replaced by max.
The maximum theorem can be used for minimization by considering the function −f instead.
The theorem is typically interpreted as providing conditions for a parametric optimization problem to have continuous solutions with regard to the parameter. In this case, Θ is the parameter space, f(x, θ) is the function to be maximized, and C(θ) gives the constraint set that f is maximized over. Then, f*(θ) is the maximized value of the function and C* is the set of points that maximize f.
The result is that if the elements of an optimization problem are sufficiently continuous, then some, but not all, of that continuity is preserved in the solutions.
Throughout this proof we will use the term neighborhood to refer to an open set containing a particular point. We preface with a preliminary lemma, which is a general fact in the calculus of correspondences. Recall that a correspondence is closed if its graph is closed.
Lemma . [ 6 ] [ 7 ] [ 8 ] If A, B : Θ ⇉ X are correspondences, A is upper hemicontinuous and compact-valued, and B is closed, then A ∩ B : Θ ⇉ X defined by (A ∩ B)(θ) = A(θ) ∩ B(θ) is upper hemicontinuous.
Let θ ∈ Θ, and suppose G is an open set containing (A ∩ B)(θ). If A(θ) ⊆ G, then the result follows immediately. Otherwise, observe that for each x ∈ A(θ) \ G we have x ∉ B(θ), and since B is closed there is a neighborhood U_x × V_x of (θ, x) in which x′ ∉ B(θ′) whenever (θ′, x′) ∈ U_x × V_x. The collection of sets {G} ∪ {V_x : x ∈ A(θ) \ G} forms an open cover of the compact set A(θ), which allows us to extract a finite subcover G, V_{x_1}, …, V_{x_n}. By upper hemicontinuity, there is a neighborhood U_θ of θ such that A(U_θ) ⊆ G ∪ V_{x_1} ∪ ⋯ ∪ V_{x_n}. Then whenever θ′ ∈ U_θ ∩ U_{x_1} ∩ ⋯ ∩ U_{x_n}, we have A(θ′) ⊆ G ∪ V_{x_1} ∪ ⋯ ∪ V_{x_n}, and so (A ∩ B)(θ′) ⊆ G. This completes the proof. □
The continuity of f ∗ {\displaystyle f^{*}} in the maximum theorem is the result of combining two independent theorems together.
Theorem 1 . [ 9 ] [ 10 ] [ 11 ] If f is upper semicontinuous and C is upper hemicontinuous, nonempty and compact-valued, then f* is upper semicontinuous.
Fix θ ∈ Θ, and let ε > 0 be arbitrary. For each x ∈ C(θ), there exists a neighborhood U_x × V_x of (θ, x) such that whenever (θ′, x′) ∈ U_x × V_x, we have f(x′, θ′) < f(x, θ) + ε. The set of neighborhoods {V_x : x ∈ C(θ)} covers C(θ), which is compact, so V_{x_1}, …, V_{x_n} suffice. Furthermore, since C is upper hemicontinuous, there exists a neighborhood U′ of θ such that whenever θ′ ∈ U′ it follows that C(θ′) ⊆ ⋃_{k=1}^{n} V_{x_k}. Let U = U′ ∩ U_{x_1} ∩ ⋯ ∩ U_{x_n}. Then for all θ′ ∈ U, we have f(x′, θ′) < f(x_k, θ) + ε for each x′ ∈ C(θ′), as x′ ∈ V_{x_k} for some k. It follows that
$f^{*}(\theta ')\leq f^{*}(\theta )+\varepsilon ,$
which was desired. □
Theorem 2 . [ 12 ] [ 13 ] [ 14 ] If f is lower semicontinuous and C is lower hemicontinuous, then f* is lower semicontinuous.
Fix θ ∈ Θ, and let ε > 0 be arbitrary.
By definition of f*, there exists x ∈ C(θ) such that f*(θ) < f(x, θ) + ε/2.
Now, since f is lower semicontinuous, there exists a neighborhood U_1 × V of (θ, x) such that whenever (θ′, x′) ∈ U_1 × V we have f(x, θ) < f(x′, θ′) + ε/2. Observe that C(θ) ∩ V ≠ ∅ (in particular, x ∈ C(θ) ∩ V). Therefore, since C is lower hemicontinuous, there exists a neighborhood U_2 such that whenever θ′ ∈ U_2 there exists x′ ∈ C(θ′) ∩ V.
Let U = U_1 ∩ U_2.
Then whenever θ′ ∈ U there exists x′ ∈ C(θ′) ∩ V, which implies
$f^{*}(\theta ')\geq f(x',\theta ')>f(x,\theta )-{\frac {\varepsilon }{2}}>f^{*}(\theta )-\varepsilon ,$
which was desired. □
Under the hypotheses of the Maximum theorem, f* is continuous. It remains to verify that C* is an upper hemicontinuous correspondence with compact values. Let θ ∈ Θ. To see that C*(θ) is nonempty, observe that the function f_θ : C(θ) → ℝ defined by f_θ(x) = f(x, θ) is continuous on the compact set C(θ). The Extreme Value theorem implies that C*(θ) is nonempty. In addition, since f_θ is continuous, it follows that C*(θ) is a closed subset of the compact set C(θ), which implies C*(θ) is compact. Finally, let D : Θ ⇉ X be defined by D(θ) = {x ∈ X : f(x, θ) = f*(θ)}. Since f is a continuous function, D is a closed correspondence. Moreover, since C*(θ) = C(θ) ∩ D(θ), the preliminary Lemma implies that C* is upper hemicontinuous. □
A natural generalization from the above results gives sufficient local conditions for f* to be continuous and C* to be nonempty, compact-valued, and upper hemicontinuous.
If, in addition to the conditions above, f is quasiconcave in x for each θ and C is convex-valued, then C* is also convex-valued. If f is strictly quasiconcave in x for each θ and C is convex-valued, then C* is single-valued, and thus is a continuous function rather than a correspondence. [ 15 ]
If f is concave in X × Θ and C has a convex graph, then f* is concave and C* is convex-valued. Similarly to above, if f is strictly concave, then C* is a continuous function. [ 15 ]
It is also possible to generalize Berge's theorem to non-compact correspondences if the objective function is K-inf-compact. [ 16 ]
Consider a utility maximization problem where a consumer makes a choice from their budget set. Translating from the notation above to the standard consumer theory notation,
- X = ℝ^l_+ is the space of all bundles of l commodities,
- Θ = ℝ^l_{++} × ℝ_{++} is the set of price vectors p and wealth levels w,
- f(x, θ) = u(x) is the consumer's utility function , and
- C(θ) = B(p, w) = {x ∈ X : p · x ≤ w} is the consumer's budget set .
Then,
- f*(θ) = v(p, w) is the indirect utility function , and
- C*(θ) = x(p, w) is the Marshallian demand .
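The following is a minimal numerical sketch of this example (assuming Python with NumPy; the Cobb–Douglas utility, the parameter a, and the brute-force grid search are illustrative choices, not part of the theorem). It shows the indirect utility v(p, w) and the demand x(p, w) varying continuously as the price p varies, as the maximum theorem guarantees for a continuous objective and a continuous, compact-valued budget correspondence.

```python
import numpy as np

def value_and_demand(p, w=1.0, a=0.4, grid=2001):
    # Brute-force maximization of the Cobb-Douglas utility u(x, y) = x**a * y**(1-a)
    # over the budget set {(x, y) : p*x + y <= w, x >= 0, y >= 0}.
    x = np.linspace(0.0, w / p, grid)        # candidate quantities of the first good
    y = np.maximum(w - p * x, 0.0)           # the rest of wealth buys the second good
    u = x**a * y**(1 - a)
    best = np.argmax(u)
    return u[best], x[best]                  # approximate value function and demand

# f*(theta) and C*(theta) vary continuously with the parameter (here, the price p).
for p in (0.5, 0.9, 1.0, 1.1, 2.0):
    v, x_star = value_and_demand(p)
    print(f"p = {p:4.2f}   v(p, w) = {v:.4f}   x(p, w) = {x_star:.4f}")
```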
Proofs in general equilibrium theory often apply the Brouwer or Kakutani fixed-point theorems to the consumer's demand, which require compactness and continuity, and the maximum theorem provides the sufficient conditions to do so. | https://en.wikipedia.org/wiki/Maximum_theorem |
Maxwell's demon is a thought experiment that appears to disprove the second law of thermodynamics . It was proposed by the physicist James Clerk Maxwell in 1867. [ 1 ] In his first letter, Maxwell referred to the entity as a "finite being" or a "being who can play a game of skill with the molecules". Lord Kelvin would later call it a " demon ". [ 2 ]
In the thought experiment, a demon controls a door between two chambers containing gas. As individual gas molecules (or atoms) approach the door, the demon quickly opens and closes the door to allow only fast-moving molecules to pass through in one direction, and only slow-moving molecules to pass through in the other. Because the kinetic temperature of a gas depends on the velocities of its constituent molecules, the demon's actions cause one chamber to warm up and the other to cool down. This would decrease the total entropy of the system , seemingly without applying any work , thereby violating the second law of thermodynamics.
The concept of Maxwell's demon has provoked substantial debate in the philosophy of science and theoretical physics , which continues to the present day. It stimulated work on the relationship between thermodynamics and information theory . Most scientists argue that, on theoretical grounds, no device can violate the second law in this way. Other researchers have implemented forms of Maxwell's demon in experiments, though they all differ from the thought experiment to some extent and none has been shown to violate the second law.
The thought experiment first appeared in a letter Maxwell wrote to Peter Guthrie Tait on 11 December 1867. It appeared again in a letter to John William Strutt in 1871, before it was presented to the public in Maxwell's 1872 book on thermodynamics titled Theory of Heat . [ 3 ]
In his letters and books, Maxwell described the agent opening the door between the chambers as a "finite being". Being a deeply religious man, he never used the word "demon". Instead, William Thomson (Lord Kelvin) was the first to use it for Maxwell's concept, in the journal Nature in 1874, and implied that he intended the Greek mythology interpretation of a daemon , a supernatural being working in the background, rather than a malevolent being. [ 2 ] [ 4 ] [ 5 ]
The second law of thermodynamics ensures (through statistical probability) that two bodies of different temperature , when brought into contact with each other and isolated from the rest of the Universe, will evolve to a thermodynamic equilibrium in which both bodies have approximately the same temperature. [ 6 ] The second law is also expressed as the assertion that in an isolated system , entropy never decreases. [ 6 ]
Maxwell conceived a thought experiment as a way of furthering the understanding of the second law. His description of the experiment is as follows: [ 6 ] [ 7 ]
... if we conceive of a being whose faculties are so sharpened that he can follow every molecule in its course, such a being, whose attributes are as essentially finite as our own, would be able to do what is impossible to us. For we have seen that molecules in a vessel full of air at uniform temperature are moving with velocities by no means uniform, though the mean velocity of any great number of them, arbitrarily selected, is almost exactly uniform. Now let us suppose that such a vessel is divided into two portions, A and B , by a division in which there is a small hole, and that a being, who can see the individual molecules, opens and closes this hole, so as to allow only the swifter molecules to pass from A to B , and only the slower molecules to pass from B to A . He will thus, without expenditure of work, raise the temperature of B and lower that of A , in contradiction to the second law of thermodynamics.
In other words, Maxwell imagines one container divided into two parts, A and B . [ 6 ] [ 8 ] Both parts are filled with the same gas at equal temperatures and placed next to each other. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B . Likewise, when a slower-than-average molecule from B flies towards the trapdoor, the demon will let it pass from B to A . The average speed of the molecules in B will have increased while in A they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B , contrary to the second law of thermodynamics. A heat engine operating between the thermal reservoirs A and B could extract useful work from this temperature difference.
The demon must allow molecules to pass in both directions in order to produce only a temperature difference; one-way passage only of faster-than-average molecules from A to B will cause higher temperature and pressure to develop on the B side.
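A toy Monte Carlo sketch can make the sorting step concrete (plain Python; the speed distribution, the threshold, and the number of trials are arbitrary illustrative choices, and the model says nothing about the measurement and memory costs discussed below). The mean squared speed, a rough proxy for temperature, ends up higher in chamber B than in chamber A after the demon's selective openings.

```python
import random

random.seed(0)
N = 5000
# Both chambers start with speeds drawn from the same distribution (equal temperature).
A = [abs(random.gauss(0.0, 1.0)) for _ in range(N)]
B = [abs(random.gauss(0.0, 1.0)) for _ in range(N)]
threshold = 1.0                      # the demon's dividing line between "fast" and "slow"
mean_sq = lambda v: sum(s * s for s in v) / len(v)

print("before:", round(mean_sq(A), 3), round(mean_sq(B), 3))   # roughly equal

for _ in range(200_000):
    if random.random() < 0.5 and A:          # a molecule from A approaches the door
        i = random.randrange(len(A))
        if A[i] > threshold:                 # only fast molecules are let through to B
            B.append(A.pop(i))
    elif B:                                  # a molecule from B approaches the door
        i = random.randrange(len(B))
        if B[i] < threshold:                 # only slow molecules are let through to A
            A.append(B.pop(i))

print("after: ", round(mean_sq(A), 3), round(mean_sq(B), 3))   # A cooler, B hotter
```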
Several physicists have presented calculations that show that the second law of thermodynamics will not actually be violated, if a more complete analysis is made of the whole system including the demon. [ 6 ] [ 8 ] [ 9 ] The essence of the physical argument is to show, by calculation, that any demon must "generate" more entropy segregating the molecules than it could ever eliminate by the method described. That is, it would take more thermodynamic work to gauge the speed of the molecules and selectively allow them to pass through the opening between A and B than the amount of energy gained by the difference of temperature caused by doing so.
One of the most famous responses to this question was suggested in 1929 by Leó Szilárd , [ 10 ] and later by Léon Brillouin . [ 6 ] [ 8 ] Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. Since the demon and the gas are interacting, we must consider the total entropy of the gas and the demon combined. The expenditure of energy by the demon will cause an increase in the entropy of the demon, which will be larger than the lowering of the entropy of the gas.
In 1960, Rolf Landauer raised an exception to this argument. [ 6 ] [ 8 ] [ 11 ] He realized that some measuring processes need not increase thermodynamic entropy as long as they were thermodynamically reversible . He suggested these "reversible" measurements could be used to sort the molecules, violating the Second Law. However, due to the connection between entropy in thermodynamics and information theory , this also meant that the recorded measurement must not be erased. In other words, to determine whether to let a molecule through, the demon must acquire information about the state of the molecule and either discard it or store it. Discarding it leads to immediate increase in entropy, but the demon cannot store it indefinitely. In 1982, Charles Bennett showed that, however well prepared, eventually the demon will run out of information storage space and must begin to erase the information it has previously gathered. [ 8 ] [ 12 ] Erasing information is a thermodynamically irreversible process that increases the entropy of a system. Although Bennett had reached the same conclusion as Szilard's 1929 paper, that a Maxwellian demon could not violate the second law because entropy would be created, he had reached it for different reasons. Regarding Landauer's principle , the minimum energy dissipated by deleting information was experimentally measured by Eric Lutz et al. in 2012. Furthermore, Lutz et al. confirmed that in order to approach the Landauer's limit, the system must asymptotically approach zero processing speed. [ 13 ] Recently, Landauer's principle has also been invoked to resolve an apparently unrelated paradox of statistical physics, Loschmidt’s paradox . [ 14 ]
John Earman and John D. Norton have argued that Szilárd and Landauer's explanations of Maxwell's demon begin by assuming that the second law of thermodynamics cannot be violated by the demon, and derive further properties of the demon from this assumption, including the necessity of consuming energy when erasing information, etc. [ 15 ] [ 16 ] It would therefore be circular to invoke these derived properties to defend the second law from the demonic argument. Bennett later acknowledged the validity of Earman and Norton's argument, while maintaining that Landauer's principle explains the mechanism by which real systems do not violate the second law of thermodynamics. [ 17 ]
Although the argument by Landauer and Bennett only answers the consistency between the second law of thermodynamics and the whole cyclic process of the entire system of a Szilard engine (a composite system of the engine and the demon), a recent approach based on the non-equilibrium thermodynamics for small fluctuating systems has provided deeper insight on each information process with each subsystem. From this viewpoint, the measurement process is regarded as a process where the correlation ( mutual information ) between the engine and the demon increases, decreasing the entropy of the system in an amount given by the mutual information. [ 18 ] If the correlation changes, thermodynamic relations such as the second law of thermodynamics and the fluctuation theorem for each subsystem should be modified, and for the case of external control a second-law like inequality [ 18 ] [ 19 ] [ 20 ] and a generalized fluctuation theorem [ 21 ] with mutual information are satisfied. For more general information processes including biological information processing, both inequality [ 22 ] and equality [ 23 ] with mutual information hold. When repeated measurements are performed, the entropy reduction of the system is given by the entropy of the sequence of measurements, [ 18 ] [ 24 ] [ 25 ] which takes into account the reduction of information due to the correlation between the measurements. More recently, Kastner has argued that the uncertainty principle forces an entropy increase when the molecule is localized to one side or the other in the Szilard engine, and that is what prevents the demon from violating the second law. [ 26 ] For the case of the original Demon who is sorting molecules by speeds, Kastner and Schlatter argue that the uncertainty principle prevents the Demon from sorting the molecules due to their delocalization upon measurement of momentum. [ 27 ]
Real-life versions of Maxwellian demons occur, but all such "real demons" or molecular demons have their entropy-lowering effects duly balanced by increase of entropy elsewhere. [ 28 ] Molecular-sized mechanisms are no longer found only in biology; they are also the subject of the emerging field of nanotechnology . Single-atom traps used by particle physicists allow an experimenter to control the state of individual quanta in a way similar to Maxwell's demon.
If hypothetical mirror matter exists, Zurab Silagadze proposes that demons can be envisaged, "which can act like perpetuum mobiles of the second kind: extract heat energy from only one reservoir, use it to do work and be isolated from the rest of ordinary world. Yet the Second Law is not violated because the demons pay their entropy cost in the hidden (mirror) sector of the world by emitting mirror photons." [ 29 ]
In 2007, David Leigh announced the creation of a nano-device based on the Brownian ratchet popularized by Richard Feynman . Leigh's device is able to drive a chemical system out of equilibrium , but it must be powered by an external source ( light in this case) and therefore does not violate thermodynamics. [ 30 ]
Previously, researchers including Nobel Prize winner Fraser Stoddart had created ring-shaped molecules called rotaxanes which could be placed on an axle connecting two sites, A and B . Particles from either site would bump into the ring and move it from end to end. If a large collection of these devices were placed in a system, half of the devices had the ring at site A and half at B , at any given moment in time. [ 31 ]
Leigh made a minor change to the axle so that if a light is shone on the device, the center of the axle will thicken, restricting the motion of the ring. It keeps the ring from moving, however, only if it is at A . Over time, therefore, the rings will be bumped from B to A and get stuck there, creating an imbalance in the system. In his experiments, Leigh was able to take a pot of "billions of these devices" from 50:50 equilibrium to a 70:30 imbalance within a few minutes. [ 32 ]
In 2009, Mark G. Raizen developed a laser atomic cooling technique which realizes the process Maxwell envisioned of sorting individual atoms in a gas into different containers based on their energy. [ 6 ] [ 33 ] [ 34 ] The new concept is a one-way wall for atoms or molecules that allows them to move in one direction, but not go back. The operation of the one-way wall relies on an irreversible atomic and molecular process of absorption of a photon at a specific wavelength, followed by spontaneous emission to a different internal state. The irreversible process is coupled to a conservative force created by magnetic fields and/or light. Raizen and collaborators proposed using the one-way wall in order to reduce the entropy of an ensemble of atoms. In parallel, Gonzalo Muga and Andreas Ruschhaupt independently developed a similar concept. Their "atom diode" was not proposed for cooling, but rather for regulating the flow of atoms. The Raizen Group demonstrated significant cooling of atoms with the one-way wall in a series of experiments in 2008. Subsequently, the operation of a one-way wall for atoms was demonstrated by Daniel Steck and collaborators later in 2008. Their experiment was based on the 2005 scheme for the one-way wall, and was not used for cooling. The cooling method realized by the Raizen Group was called "single-photon cooling", because only one photon on average is required in order to bring an atom to near-rest. This is in contrast to other laser cooling techniques which use the momentum of the photon and require a two-level cycling transition.
In 2006, Raizen, Muga, and Ruschhaupt showed in a theoretical paper that as each atom crosses the one-way wall, it scatters one photon, and information is provided about the turning point and hence the energy of that particle. The entropy increase of the radiation field scattered from a directional laser into a random direction is exactly balanced by the entropy reduction of the atoms as they are trapped by the one-way wall.
This technique is widely described as a "Maxwell's demon" because it realizes Maxwell's process of creating a temperature difference by sorting high and low energy atoms into different containers. However, scientists have pointed out that it does not violate the second law of thermodynamics , [ 6 ] [ 35 ] does not result in a net decrease in entropy, [ 6 ] [ 35 ] and cannot be used to produce useful energy. This is because the process requires more energy from the laser beams than could be produced by the temperature difference generated. The atoms absorb low entropy photons from the laser beam and emit them in a random direction, thus increasing the entropy of the environment. [ 6 ] [ 35 ]
In 2014, Pekola et al. demonstrated an experimental realization of a Szilárd engine. [ 36 ] [ 37 ] Only a year later and based on an earlier theoretical proposal, [ 38 ] the same group presented the first experimental realization of an autonomous Maxwell's demon, which extracts microscopic information from a system and reduces its entropy by applying feedback. The demon is based on two capacitively coupled single-electron devices, both integrated on the same electronic circuit. The operation of the demon is directly observed as a temperature drop in the system, with a simultaneous temperature rise in the demon arising from the thermodynamic cost of generating the mutual information. [ 39 ] In 2016, Pekola et al. demonstrated a proof-of-principle of an autonomous demon in coupled single-electron circuits, showing a way to cool critical elements in a circuit with information as a fuel. [ 40 ] Pekola et al. have also proposed that a simple qubit circuit, e.g., made of a superconducting circuit, could provide a basis to study a quantum Szilard's engine. [ 41 ]
Daemons in computing , generally processes that run on servers to respond to users, are named for Maxwell's demon. [ 42 ]
Historian Henry Brooks Adams , in his manuscript The Rule of Phase Applied to History , attempted to use Maxwell's demon as a historical metaphor , though he misunderstood and misapplied the original principle. [ 43 ] Adams interpreted history as a process moving towards "equilibrium", but he saw militaristic nations (he felt Germany pre-eminent in this class) as tending to reverse this process, a Maxwell's demon of history. Adams made many attempts to respond to the criticism of his formulation from his scientific colleagues, but the work remained incomplete at his death in 1918 and was published posthumously. [ 44 ] | https://en.wikipedia.org/wiki/Maxwell's_demon |
Maxwell's equations , or Maxwell–Heaviside equations , are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism , classical optics , and electric and magnetic circuits.
The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges , currents , and changes of the fields. [ note 1 ] The equations are named after the physicist and mathematician James Clerk Maxwell , who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside . [ 1 ]
Maxwell's equations may be combined to demonstrate how fluctuations in electromagnetic fields (waves) propagate at a constant speed in vacuum, c ( 299 792 458 m/s [ 2 ] ). Known as electromagnetic radiation , these waves occur at various wavelengths to produce a spectrum of radiation from radio waves to gamma rays .
In partial differential equation form and a coherent system of units , Maxwell's microscopic equations can be written as (top to bottom: Gauss's law, Gauss's law for magnetism, Faraday's law, Ampère–Maxwell law)
$\nabla \cdot \mathbf {E} ={\frac {\rho }{\varepsilon _{0}}}$
$\nabla \cdot \mathbf {B} =0$
$\nabla \times \mathbf {E} =-{\frac {\partial \mathbf {B} }{\partial t}}$
$\nabla \times \mathbf {B} =\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)$
Here E is the electric field, B the magnetic field, ρ the electric charge density and J the current density ; ε 0 is the vacuum permittivity and μ 0 the vacuum permeability .
The equations have two major variants:
The term "Maxwell's equations" is often also used for equivalent alternative formulations . Versions of Maxwell's equations based on the electric and magnetic scalar potentials are preferred for explicitly solving the equations as a boundary value problem , analytical mechanics , or for use in quantum mechanics . The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest . Maxwell's equations in curved spacetime , commonly used in high-energy and gravitational physics , are compatible with general relativity . [ note 2 ] In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences.
The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation.
Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics .
Gauss's law describes the relationship between an electric field and electric charges : an electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through a closed surface is proportional to the enclosed charge, including bound charge due to polarization of material. The coefficient of the proportion is the permittivity of free space .
Gauss's law for magnetism states that electric charges have no magnetic analogues, called magnetic monopoles ; no north or south magnetic poles exist in isolation. [ 3 ] Instead, the magnetic field of a material is attributed to a dipole , and the net outflow of the magnetic field through a closed surface is zero. Magnetic dipoles may be represented as loops of current or inseparable pairs of equal and opposite "magnetic charges". Precisely, the total magnetic flux through a Gaussian surface is zero, and the magnetic field is a solenoidal vector field . [ note 3 ]
The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field corresponds to curl of an electric field . [ 3 ] In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of change of the magnetic flux through the enclosed surface.
The electromagnetic induction is the operating principle behind many electric generators : for example, a rotating bar magnet creates a changing magnetic field and generates an electric field in a nearby wire.
The original law of Ampère states that magnetic fields relate to electric current . Maxwell's addition states that magnetic fields also relate to changing electric fields, which Maxwell called displacement current . The integral form states that electric and displacement currents are associated with a proportional magnetic field along any enclosing curve.
Maxwell's modification of Ampère's circuital law is important because the laws of Ampère and Gauss must otherwise be adjusted for static fields. [ 4 ] [ clarification needed ] As a consequence, it predicts that a rotating magnetic field occurs with a changing electric field. [ 3 ] [ 5 ] A further consequence is the existence of self-sustaining electromagnetic waves which travel through empty space .
The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, [ note 4 ] matches the speed of light ; indeed, light is one form of electromagnetic radiation (as are X-rays , radio waves , and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics .
In the electric and magnetic field formulation there are four equations that determine the fields for a given charge and current distribution. A separate law of nature , the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. A version of this law was included in Maxwell's original equations but, by convention, is no longer. The vector calculus formalism below, the work of Oliver Heaviside , [ 6 ] [ 7 ] has become standard. It is rotationally invariant, and therefore mathematically more transparent than Maxwell's original 20 equations in x , y and z components. The relativistic formulations are more symmetric and Lorentz invariant. For the same equations expressed using tensor calculus or differential forms, see § Alternative formulations .
The differential and integral formulations are mathematically equivalent; both are useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis . [ 8 ]
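As an illustration of this last point, the local differential (curl) equations can be stepped forward directly on a grid. The article mentions finite element analysis; the sketch below instead uses the simpler finite-difference time-domain (FDTD) approach in one dimension (assuming Python with NumPy and normalized units c = ε0 = μ0 = 1; the grid sizes and the Gaussian source are arbitrary choices, and this is a minimal sketch rather than a production solver).

```python
import numpy as np

# 1D FDTD (Yee) sketch in normalized units (c = eps0 = mu0 = 1). With E along z,
# H along y, and propagation along x, the two curl equations reduce to
#   dHy/dt = dEz/dx   and   dEz/dt = dHy/dx,
# which are advanced with a staggered leapfrog update.
nx, nt = 400, 300
courant = 0.5                                 # dt/dx; must be <= 1 in 1D for stability
Ez = np.zeros(nx)
Hy = np.zeros(nx)

for n in range(nt):
    Hy[:-1] += courant * (Ez[1:] - Ez[:-1])   # update H from the spatial derivative of E
    Ez[1:] += courant * (Hy[1:] - Hy[:-1])    # update E from the spatial derivative of H
    Ez[50] += np.exp(-((n - 40) / 12.0) ** 2) # additive ("soft") Gaussian source

# After the run, the pulse has propagated away from the source cell (index 50).
print("field maximum is now at cell", int(np.argmax(np.abs(Ez))))
```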
Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated.
The equations introduce the electric field , E , a vector field , and the magnetic field , B , a pseudovector field, each generally having a time and location dependence.
The sources are
The universal constants appearing in the equations (the first two appearing explicitly only in the SI formulation) are:
In the differential equations,
In the integral equations,
The equations are a little easier to interpret with time-independent surfaces and volumes. Time-independent surfaces and volumes are "fixed" and do not change over a given time interval. For example, since the surface is time-independent, we can bring the differentiation under the integral sign in Faraday's law:
${\frac {\mathrm {d} }{\mathrm {d} t}}\iint _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }{\frac {\partial \mathbf {B} }{\partial t}}\cdot \mathrm {d} \mathbf {S} \,.$
Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by using the differential version and using Gauss' and Stokes' theorems as appropriate.
The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of ε 0 and μ 0 into the units (and thus redefining these). With a corresponding change in the values of the quantities for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor : the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension. [ 9 ] : vii Such modified definitions are conventionally used with the Gaussian ( CGS ) units. Using these definitions, colloquially "in Gaussian units", [ 10 ] the Maxwell equations become: [ 11 ]
$\nabla \cdot \mathbf {E} =4\pi \rho$
$\nabla \cdot \mathbf {B} =0$
$\nabla \times \mathbf {E} =-{\frac {1}{c}}{\frac {\partial \mathbf {B} }{\partial t}}$
$\nabla \times \mathbf {B} ={\frac {1}{c}}\left(4\pi \mathbf {J} +{\frac {\partial \mathbf {E} }{\partial t}}\right)$
The equations simplify slightly when a system of quantities is chosen in which the speed of light, c , is used for nondimensionalization , so that, for example, seconds and lightseconds are interchangeable, and c = 1.
Further changes are possible by absorbing factors of 4 π . This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units , used mainly in particle physics ).
The equivalence of the differential and integral formulations is a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem .
According to the (purely mathematical) Gauss divergence theorem , the electric flux through the boundary surface ∂Ω can be rewritten as
$\oiint _{\partial \Omega }\mathbf {E} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {E} \,\mathrm {d} V.$
The integral version of Gauss's equation can thus be rewritten as
$\iiint _{\Omega }\left(\nabla \cdot \mathbf {E} -{\frac {\rho }{\varepsilon _{0}}}\right)\,\mathrm {d} V=0.$
Since Ω is arbitrary (e.g. an arbitrary small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere. This is
the differential equations formulation of Gauss's equation, up to a trivial rearrangement.
Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives
$\iiint _{\Omega }\nabla \cdot \mathbf {B} \,\mathrm {d} V=0,$
which is satisfied for all Ω if and only if $\nabla \cdot \mathbf {B} =0$ everywhere.
By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve ∂Σ to an integral of the "circulation of the fields" (i.e. their curls ) over a surface it bounds, i.e.
$\oint _{\partial \Sigma }\mathbf {B} \cdot \mathrm {d} {\boldsymbol {\ell }}=\iint _{\Sigma }(\nabla \times \mathbf {B} )\cdot \mathrm {d} \mathbf {S} .$
Hence the Ampère–Maxwell law , the modified version of Ampère's circuital law, in integral form can be rewritten as
$\iint _{\Sigma }\left(\nabla \times \mathbf {B} -\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)\cdot \mathrm {d} \mathbf {S} =0.$
Since Σ can be chosen arbitrarily, e.g. as an arbitrary small, arbitrary oriented, and arbitrary centered disk, we conclude that the integrand is zero if and only if the Ampère–Maxwell law in differential equations form is satisfied.
The equivalence of Faraday's law in differential and integral form follows likewise.
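The divergence theorem behind these equivalences can also be checked numerically. The sketch below (Python with NumPy; the charge value, cube size, and grid resolution are arbitrary choices) integrates the flux of a point charge's Coulomb field over the faces of a cube centred on the charge and compares it with q/ε0, which Gauss's law predicts for any closed surface enclosing the charge.

```python
import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
q = 1.0e-9                # point charge at the origin, C

def E(r):
    """Electric field of a point charge at the origin (Coulomb's law)."""
    rn = np.linalg.norm(r, axis=-1, keepdims=True)
    return q * r / (4 * np.pi * eps0 * rn**3)

# Numerically integrate the flux of E through the surface of a cube of side 2a
# centred on the charge, using the midpoint rule on each face.
a, n = 0.3, 400
u = (np.arange(n) + 0.5) / n * 2 * a - a          # midpoints of the surface cells
U, V = np.meshgrid(u, u)
dA = (2 * a / n) ** 2
flux = 0.0
for axis in range(3):                              # two opposite faces per axis
    for sign in (+1.0, -1.0):
        pts = np.zeros((n, n, 3))
        pts[..., axis] = sign * a
        pts[..., (axis + 1) % 3] = U
        pts[..., (axis + 2) % 3] = V
        normal = np.zeros(3)
        normal[axis] = sign
        flux += np.sum(E(pts) @ normal) * dA

print(flux, q / eps0)    # the two values agree to within the discretization error
```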
The line integrals and curls are analogous to quantities in classical fluid dynamics : the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field.
The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the Ampère–Maxwell law has zero divergence by the div–curl identity . Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives:
$0=\nabla \cdot (\nabla \times \mathbf {B} )=\nabla \cdot \left(\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +\varepsilon _{0}{\frac {\partial }{\partial t}}\nabla \cdot \mathbf {E} \right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +{\frac {\partial \rho }{\partial t}}\right),$
i.e.,
${\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0.$
By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary:
${\frac {\mathrm {d} }{\mathrm {d} t}}\iiint _{\Omega }\rho \,\mathrm {d} V=-\oiint _{\partial \Omega }\mathbf {J} \cdot \mathrm {d} \mathbf {S} .$
In particular, in an isolated system the total charge is conserved.
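The identity ∇ · (∇ × B) = 0 that starts this derivation can be verified symbolically. The following sketch (assuming Python with SymPy; the component names are illustrative) builds the curl of an arbitrary smooth vector field and confirms that its divergence vanishes because mixed partial derivatives commute.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
# An arbitrary (symbolic, smooth) magnetic field B(x, y, z, t)
Bx, By, Bz = [sp.Function(name)(x, y, z, t) for name in ('B_x', 'B_y', 'B_z')]

curl_B = sp.Matrix([
    sp.diff(Bz, y) - sp.diff(By, z),
    sp.diff(Bx, z) - sp.diff(Bz, x),
    sp.diff(By, x) - sp.diff(Bx, y),
])
div_curl_B = sum(sp.diff(curl_B[i], var) for i, var in enumerate((x, y, z)))

# Mixed partial derivatives commute, so the divergence of a curl vanishes identically;
# applying this to the Ampere-Maxwell law yields the continuity equation above.
print(sp.simplify(div_curl_B))   # 0
```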
In a region with no charges ( ρ = 0 ) and no currents ( J = 0 ), such as in vacuum, Maxwell's equations reduce to:
$\nabla \cdot \mathbf {E} =0,\qquad \nabla \times \mathbf {E} +{\frac {\partial \mathbf {B} }{\partial t}}=0,$
$\nabla \cdot \mathbf {B} =0,\qquad \nabla \times \mathbf {B} -\mu _{0}\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}=0.$
Taking the curl (∇×) of the curl equations, and using the curl of the curl identity, we obtain
$\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\qquad \mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.$
The quantity $\mu _{0}\varepsilon _{0}$ has the dimension (T/L)². Defining $c=(\mu _{0}\varepsilon _{0})^{-1/2}$ , the equations above have the form of the standard wave equations
${\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\qquad {\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.$
Already during Maxwell's lifetime, it was found that the known values for $\varepsilon _{0}$ and $\mu _{0}$ give $c\approx 2.998\times 10^{8}~{\text{m/s}}$ , then already known to be the speed of light in free space. This led him to propose that light and radio waves were propagating electromagnetic waves, a proposal that has since been amply confirmed. In the old SI system of units, the values of $\mu _{0}=4\pi \times 10^{-7}$ and $c=299\,792\,458~{\text{m/s}}$ are defined constants (which means that by definition $\varepsilon _{0}=8.854\,187\,8...\times 10^{-12}~{\text{F/m}}$ ) that define the ampere and the metre. In the new SI system, only c keeps its defined value, and the electron charge gets a defined value.
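The numerical coincidence can be checked in one line (a minimal sketch in plain Python; the constants are quoted to limited precision, with μ0 taken at its old defined value):

```python
from math import pi, sqrt

mu_0 = 4e-7 * pi                # vacuum permeability, the old defined SI value (N/A^2)
eps_0 = 8.8541878128e-12        # vacuum permittivity (F/m)

c = 1.0 / sqrt(mu_0 * eps_0)
print(c)                        # approximately 2.998e8 m/s, the speed of light
```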
In materials with relative permittivity , ε r , and relative permeability , μ r , the phase velocity of light becomes
$v_{\text{p}}={\frac {1}{\sqrt {\mu _{0}\mu _{\text{r}}\varepsilon _{0}\varepsilon _{\text{r}}}}},$
which is usually [ note 5 ] less than c .
In addition, E and B are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law . In turn, that electric field creates a changing magnetic field through Maxwell's modification of Ampère's circuital law . This perpetual cycle allows these waves, now known as electromagnetic radiation , to move through space at velocity c .
The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping.
The microscopic version is sometimes called "Maxwell's equations in vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents. [ 12 ] : 5
"Maxwell's macroscopic equations", also known as Maxwell's equations in matter , are more similar to those that Maxwell introduced himself.
In the macroscopic equations, the influence of bound charge Q_b and bound current I_b is incorporated into the displacement field D and the magnetizing field H , while the equations depend only on the free charges Q_f and free currents I_f . This reflects a splitting of the total electric charge Q and current I (and their densities ρ and J ) into free and bound parts:
$Q=Q_{\text{f}}+Q_{\text{b}}=\iiint _{\Omega }\left(\rho _{\text{f}}+\rho _{\text{b}}\right)\,\mathrm {d} V=\iiint _{\Omega }\rho \,\mathrm {d} V,$
$I=I_{\text{f}}+I_{\text{b}}=\iint _{\Sigma }\left(\mathbf {J} _{\text{f}}+\mathbf {J} _{\text{b}}\right)\cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} .$
The cost of this splitting is that the additional fields D and H need to be determined through phenomenological constitutive equations relating these fields to the electric field E and the magnetic field B , together with the bound charge and current.
See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum; [ note 6 ] and the macroscopic equations, dealing with free charge and current, practical to use within materials.
When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds the same, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization P of the material, its dipole moment per unit volume. If P is uniform, a macroscopic separation of charge is produced only at the surfaces where P enters and leaves the material. For non-uniform P , a charge is also produced in the bulk. [ 13 ]
Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons . The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization M . [ 14 ]
The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of P and M , which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume.
The definitions of the auxiliary fields are:
$\mathbf {D} (\mathbf {r} ,t)=\varepsilon _{0}\mathbf {E} (\mathbf {r} ,t)+\mathbf {P} (\mathbf {r} ,t),$
$\mathbf {H} (\mathbf {r} ,t)={\frac {1}{\mu _{0}}}\mathbf {B} (\mathbf {r} ,t)-\mathbf {M} (\mathbf {r} ,t),$
where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density ρ_b and bound current density J_b in terms of polarization P and magnetization M are then defined as
$\rho _{\text{b}}=-\nabla \cdot \mathbf {P} ,$
$\mathbf {J} _{\text{b}}=\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}.$
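These definitions can be exercised symbolically. The sketch below (assuming Python with SymPy; the particular polarization and magnetization fields are made up purely for illustration) computes the bound charge density ρ_b = −∇ · P and the bound current density J_b = ∇ × M + ∂P/∂t for simple non-uniform fields.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# Illustrative (made-up) polarization and magnetization fields
P = sp.Matrix([x**2, 0, 0])           # P grows along x inside the material
M = sp.Matrix([0, 0, sp.sin(x)])      # M along z, varying with x

rho_b = -sum(sp.diff(P[i], v) for i, v in enumerate((x, y, z)))   # bound charge density
curl_M = sp.Matrix([
    sp.diff(M[2], y) - sp.diff(M[1], z),
    sp.diff(M[0], z) - sp.diff(M[2], x),
    sp.diff(M[1], x) - sp.diff(M[0], y),
])
J_b = curl_M + sp.diff(P, t)                                       # bound current density

print(rho_b)   # -2*x : net bound charge appears where P is non-uniform
print(J_b.T)   # Matrix([[0, -cos(x), 0]]) : the magnetization current circulates
```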
If we define the total, bound, and free charge and current density by
$\rho =\rho _{\text{b}}+\rho _{\text{f}},\qquad \mathbf {J} =\mathbf {J} _{\text{b}}+\mathbf {J} _{\text{f}},$
and use the defining relations above to eliminate D and H , the "macroscopic" Maxwell's equations reproduce the "microscopic" equations.
In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between displacement field D and the electric field E , as well as the magnetizing field H and the magnetic field B . Equivalently, we have to specify the dependence of the polarization P (hence the bound charge) and the magnetization M (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations . For real-world materials, the constitutive relations are rarely simple, except approximately, and usually determined by experiment. See the main article on constitutive relations for a fuller description. [ 15 ] : 44–45
For materials without polarization and magnetization, the constitutive relations are (by definition) [ 9 ] : 2
$\mathbf {D} =\varepsilon _{0}\mathbf {E} ,\qquad \mathbf {H} ={\frac {1}{\mu _{0}}}\mathbf {B} ,$
where ε 0 is the permittivity of free space and μ 0 the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal.
An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarization and magnetization.
More generally, for linear materials the constitutive relations are [ 15 ] : 44–45
$\mathbf {D} =\varepsilon \mathbf {E} ,\qquad \mathbf {H} ={\frac {1}{\mu }}\mathbf {B} ,$
where ε is the permittivity and μ the permeability of the material. For the displacement field D the linear approximation is usually excellent because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high power pulsed lasers) the interatomic electric fields of materials, of the order of 10^11 V/m, are much higher than the external field. For the magnetizing field H , however, the linear approximation can break down in common materials like iron, leading to phenomena like hysteresis . Even the linear case can have various complications, however.
Even more generally, in the case of non-linear materials (see for example nonlinear optics ), D and P are not necessarily proportional to E , similarly H or M is not necessarily proportional to B . In general D and H depend on both E and B , on location and time, and possibly other physical quantities.
In applications one also has to describe how the free currents and charge density behave in terms of E and B possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. E.g., the original equations given by Maxwell (see History of Maxwell's equations ) included Ohm's law in the form J f = σ E . {\displaystyle \mathbf {J} _{\text{f}}=\sigma \mathbf {E} .}
Following are some of the several other mathematical formalisms of Maxwell's equations, with the columns separating the two homogeneous Maxwell equations from the two inhomogeneous ones. Each formulation has versions directly in terms of the electric and magnetic fields, and indirectly in terms of the electrical potential φ and the vector potential A . Potentials were introduced as a convenient way to solve the homogeneous equations, but it was thought that all observable physics was contained in the electric and magnetic fields (or relativistically, the Faraday tensor). The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the electric and magnetic fields vanish ( Aharonov–Bohm effect ).
Each table describes one formalism. See the main article for details of each formulation.
The direct spacetime formulations make manifest that the Maxwell equations are relativistically invariant , where space and time are treated on equal footing. Because of this symmetry, the electric and magnetic fields are treated on equal footing and are recognized as components of the Faraday tensor . This reduces the four Maxwell equations to two, which simplifies the equations, although we can no longer use the familiar vector formulation. Formulations of the Maxwell equations that do not treat space and time manifestly on the same footing have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables. For this reason the relativistic invariant equations are usually called the Maxwell equations as well.
Each table below describes one formalism.
In the potential formulations, the Lorenz gauge condition reads ∂ α A α = 0 {\displaystyle \partial _{\alpha }A^{\alpha }=0} in flat-spacetime tensor notation, ∇ α A α = 0 {\displaystyle \nabla _{\alpha }A^{\alpha }=0} with the covariant derivative in curved spacetime, and d ⋆ A = 0 {\displaystyle \mathrm {d} {\star }A=0} in the language of differential forms.
Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations . Historically, a quaternionic formulation [ 17 ] [ 18 ] was used.
Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations . These all form a set of coupled partial differential equations which are often very difficult to solve: the solutions encompass all the diverse phenomena of classical electromagnetism . Some general remarks follow.
As for any differential equation, boundary conditions [ 19 ] [ 20 ] [ 21 ] and initial conditions [ 22 ] are necessary for a unique solution . For example, even with no charges and no currents anywhere in spacetime, there are the obvious solutions for which E and B are zero or constant, but there are also non-trivial solutions corresponding to electromagnetic waves. In some cases, Maxwell's equations are solved over the whole of space, and boundary conditions are given as asymptotic limits at infinity. [ 23 ] In other cases, Maxwell's equations are solved in a finite region of space, with appropriate conditions on the boundary of that region, for example an artificial absorbing boundary representing the rest of the universe, [ 24 ] [ 25 ] or periodic boundary conditions , or walls that isolate a small region from the outside world (as with a waveguide or cavity resonator ). [ 26 ]
Jefimenko's equations (or the closely related Liénard–Wiechert potentials ) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. They assume specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. However, Jefimenko's equations are unhelpful in situations when the charges and currents are themselves affected by the fields they create.
Numerical methods for differential equations can be used to compute approximate solutions of Maxwell's equations when exact solutions are impossible. These include the finite element method and finite-difference time-domain method . [ 19 ] [ 21 ] [ 27 ] [ 28 ] [ 29 ] For more details, see Computational electromagnetics .
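For orientation, here is a minimal one-dimensional sketch of the finite-difference time-domain (Yee) approach mentioned above, in vacuum and in units with c = 1; the grid size, Courant number, and Gaussian source are illustrative assumptions rather than anything prescribed by the text:

```python
import numpy as np

# Minimal 1D FDTD (Yee) sketch: staggered E and H fields, leapfrog updates.
nx, nt = 400, 800
Ez = np.zeros(nx)       # electric field at integer grid points
Hy = np.zeros(nx - 1)   # magnetic field at half-integer points
S = 0.5                 # Courant number dt/dx (<= 1 for stability in 1D)

for n in range(nt):
    Hy += S * (Ez[1:] - Ez[:-1])                      # update H from curl of E
    Ez[1:-1] += S * (Hy[1:] - Hy[:-1])                # update E from curl of H
    Ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)    # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(Ez).max())
```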
Maxwell's equations seem overdetermined , in that they involve six unknowns (the three components of E and B ) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampère's circuital laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation .) This is related to a certain limited kind of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law and Ampère's circuital law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does, and assuming conservation of charge and the nonexistence of magnetic monopoles. [ 30 ] [ 31 ] This explanation was first introduced by Julius Adams Stratton in 1941. [ 32 ]
Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations become not overdetermined after all. The resulting formulation can lead to more accurate algorithms that take all four laws into account. [ 33 ]
Both identities ∇ ⋅ ∇ × B ≡ 0 , ∇ ⋅ ∇ × E ≡ 0 {\displaystyle \nabla \cdot \nabla \times \mathbf {B} \equiv 0,\nabla \cdot \nabla \times \mathbf {E} \equiv 0} , which reduce eight equations to six independent ones, are the true reason for the overdetermination. [ 34 ] [ 35 ]
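The underlying identity can also be checked symbolically. The sketch below uses SymPy (an assumption of this example, not a tool mentioned in the text) to verify ∇⋅(∇×F) = 0 for a generic vector field F:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

# Symbolic check of div(curl F) = 0 for an arbitrary (generic symbolic) field F.
N = CoordSys3D("N")
f1 = sp.Function("f1")(N.x, N.y, N.z)
f2 = sp.Function("f2")(N.x, N.y, N.z)
f3 = sp.Function("f3")(N.x, N.y, N.z)
F = f1 * N.i + f2 * N.j + f3 * N.k
print(sp.simplify(divergence(curl(F))))   # -> 0
```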
Equivalently, the overdetermination can be viewed as implying conservation of electric and magnetic charge, since these conservation laws are required in the derivation described above but are also implied by the two Gauss's laws.
For linear algebraic equations, one can make 'nice' rules to rewrite the equations and unknowns, and the equations can be linearly dependent. But in differential equations, and especially partial differential equations (PDEs), one also needs appropriate boundary conditions, which depend on the equations in ways that are not so obvious. Moreover, if one rewrites the equations in terms of the vector and scalar potentials, then they become underdetermined because of gauge freedom, and a definite solution requires gauge fixing .
Maxwell's equations and the Lorentz force law (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena. However, they do not account for quantum effects, and so their domain of applicability is limited. Maxwell's equations are thought of as the classical limit of quantum electrodynamics (QED).
Some observed electromagnetic phenomena cannot be explained with Maxwell's equations if the source of the electromagnetic fields are the classical distributions of charge and current. These include photon–photon scattering and many other phenomena related to photons or virtual photons , " nonclassical light " and quantum entanglement of electromagnetic fields (see Quantum optics ). E.g. quantum cryptography cannot be described by Maxwell theory, not even approximately. The approximate nature of Maxwell's equations becomes more and more apparent when going into the extremely strong field regime (see Euler–Heisenberg Lagrangian ) or to extremely small distances.
Finally, Maxwell's equations cannot explain any phenomenon involving individual photons interacting with quantum matter, such as the photoelectric effect , Planck's law , the Duane–Hunt law , and single-photon light detectors . However, many such phenomena may be explained using a halfway theory of quantum matter coupled to a classical electromagnetic field, either as external field or with the expected value of the charge current and density on the right hand side of Maxwell's equations. This is known as semiclassical theory or self-field QED and was initially discovered by de Broglie and Schrödinger and later fully developed by E.T. Jaynes and A.O. Barut.
Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well.
Maxwell's equations posit that there is electric charge , but no magnetic charge (also called magnetic monopoles ), in the universe. Indeed, magnetic charge has never been observed, despite extensive searches, [ note 7 ] and may not exist. If magnetic monopoles did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields. [ 9 ] : 273–275 | https://en.wikipedia.org/wiki/Maxwell's_equations |
In probability theory , Maxwell's theorem (known also as Herschel-Maxwell's theorem and Herschel-Maxwell's derivation ) states that if the probability distribution of a random vector in R n {\displaystyle \mathbb {R} ^{n}} is unchanged by rotations, and if the components are independent, then the components are identically distributed and normally distributed.
If the probability distribution of a vector -valued random variable X = ( X 1 , ..., X n ) T is the same as the distribution of GX for every n × n orthogonal matrix G and the components are independent , then the components X 1 , ..., X n are normally distributed with expected value 0 and all have the same variance . This theorem is one of many characterizations of the normal distribution.
The only rotationally invariant probability distributions on R n that have independent components are multivariate normal distributions with expected value 0 and variance σ 2 I n , (where I n = the n × n identity matrix), for some positive number σ 2 .
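A quick numerical illustration (not a proof, and not part of the article): rotating a pair of independent Gaussian samples leaves a simple signature of dependence, the correlation of the squared components, at zero, while rotating independent non-Gaussian samples does not. The sample size, the 45-degree rotation, and the use of squared components are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
c = s = 1.0 / np.sqrt(2.0)   # 45-degree rotation

def corr_of_squares(x, y):
    xr, yr = c * x - s * y, s * x + c * y     # rotated components
    return np.corrcoef(xr**2, yr**2)[0, 1]

g = rng.standard_normal((2, n))               # independent Gaussians
u = rng.uniform(-1.0, 1.0, (2, n))            # independent uniforms
print("gaussian:", corr_of_squares(*g))       # close to 0
print("uniform :", corr_of_squares(*u))       # clearly nonzero (negative)
```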
John Herschel proved the theorem in 1850 . [ 1 ] Ten years later, James Clerk Maxwell proved the theorem in Proposition IV of his 1860 paper. [ 2 ] [ 3 ]
We only need to prove the theorem for the 2-dimensional case, since we can then generalize it to n-dimensions by applying the theorem sequentially to each pair of coordinates.
Since rotating by 90 degrees preserves the joint distribution, X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} have the same probability measure: let it be μ {\displaystyle \mu } . If μ {\displaystyle \mu } is a Dirac delta distribution at zero, then it is in particular a degenerate gaussian distribution. Let us now assume that it is not a Dirac delta distribution at zero.
By the Lebesgue's decomposition theorem , we decompose μ {\displaystyle \mu } to a sum of regular measure and an atomic measure: μ = μ r + μ s {\displaystyle \mu =\mu _{r}+\mu _{s}} . We need to show that μ s = 0 {\displaystyle \mu _{s}=0} ; we proceed by contradiction. Suppose μ s {\displaystyle \mu _{s}} contains an atomic part, then there exists some x ∈ R {\displaystyle x\in \mathbb {R} } such that μ s ( { x } ) > 0 {\displaystyle \mu _{s}(\{x\})>0} . By independence of X 1 , X 2 {\displaystyle X_{1},X_{2}} , the conditional variable X 2 | { X 1 = x } {\displaystyle X_{2}|\{X_{1}=x\}} is distributed the same way as X 2 {\displaystyle X_{2}} . Suppose x = 0 {\displaystyle x=0} , then since we assumed μ {\displaystyle \mu } is not concentrated at zero, P r ( X 2 ≠ 0 ) > 0 {\displaystyle Pr(X_{2}\neq 0)>0} , and so the double ray { ( x 1 , x 2 ) : x 1 = 0 , x 2 ≠ 0 } {\displaystyle \{(x_{1},x_{2}):x_{1}=0,x_{2}\neq 0\}} has nonzero probability. Now, by rotational symmetry of μ × μ {\displaystyle \mu \times \mu } , any rotation of the double ray also has the same nonzero probability, and since any two rotations are disjoint, their union has infinite probability; thus arriving at a contradiction.
Let μ {\displaystyle \mu } have probability density function ρ {\displaystyle \rho } ; the problem reduces to solving the functional equation
ρ ( x ) ρ ( y ) = ρ ( x cos θ + y sin θ ) ρ ( x sin θ − y cos θ ) . {\displaystyle \rho (x)\rho (y)=\rho (x\cos \theta +y\sin \theta )\rho (x\sin \theta -y\cos \theta ).} | https://en.wikipedia.org/wiki/Maxwell's_theorem |
Maxwell’s thermodynamic surface is an 1874 sculpture [ 1 ] made by Scottish physicist James Clerk Maxwell (1831–1879). This model provides a three-dimensional representation of the various states of a fictitious substance with water-like properties. [ 2 ] The plot has coordinates volume (x), entropy (y), and energy (z). It was based on the American scientist Josiah Willard Gibbs ’ graphical thermodynamics papers of 1873. [ 3 ] [ 4 ] In Maxwell's words, the model allowed "the principal features of known substances [to] be represented on a convenient scale." [ 5 ]
Gibbs' papers defined what Gibbs called the "thermodynamic surface," which expressed the relationship between the volume, entropy, and energy of a substance at different temperatures and pressures. However, Gibbs did not include any diagrams of this surface. [ 3 ] [ 6 ] After receiving reprints of Gibbs' papers, Maxwell recognized the insight afforded by Gibbs' new point of view and set about constructing physical three-dimensional models of the surface. [ 7 ] This reflected Maxwell's talent as a strong visual thinker [ 8 ] and prefigured modern scientific visualization techniques. [ 3 ]
Maxwell sculpted the original model in clay and made several plaster casts of the clay model, sending one to Gibbs as a gift, keeping two in his laboratory at Cambridge University . [ 3 ] Maxwell's copy is on display at the Cavendish Laboratory of Cambridge University, [ 3 ] [ 9 ] while Gibbs' copy is on display at the Sloane Physics Laboratory of Yale University , [ 10 ] where Gibbs held a professorship. Two copies reside at the National Museum of Scotland , one via Peter Tait and the other via George Chrystal . [ 11 ] [ 12 ] [ 13 ] Another was sent to Thomas Andrews . [ 13 ] A number of historic photographs were taken of these plaster casts during the middle of the twentieth century – including one by James Pickands II, published in 1942 [ 14 ] – and these photographs exposed a wider range of people to Maxwell's visualization approach.
As explained by Gibbs and appreciated by Maxwell, the advantage of a U-V-S (energy-volume-entropy) surface over the usual P-V-T ( pressure-volume-temperature ) surface was that it allowed sharp, discontinuous phase transitions to be explained geometrically as emerging from a purely continuous and smooth state function U ( V , S ) {\displaystyle U(V,S)} ; Maxwell's surface demonstrated the generic behaviour for a substance that can exist in solid, liquid, and gaseous phases. The basic geometrical operation involved simply placing a tangent plane (such as a flat sheet of glass) on the surface and rolling it around, observing where it touches the surface. Using this operation, it was possible to explain phase coexistence and the triple point , to identify the boundary between absolutely stable and metastable phases (e.g., superheating and supercooling ) and the spinodal boundary between metastable and unstable phases, and to illustrate the critical point . [ 15 ]
Maxwell drew lines of equal pressure (isopiestics) and of equal temperature (isothermals) on his plaster cast by placing it in the sunlight, and "tracing the curve when the rays just grazed the surface." [ 2 ] He sent sketches of these lines to a number of colleagues. [ 16 ] For example, his letter to Thomas Andrews of 15 July 1875 included sketches of these lines. [ 2 ] Maxwell provided a more detailed explanation and a clearer drawing of the lines (pictured) in the revised version of his book Theory of Heat , [ 15 ] and a version of this drawing appeared on a 2005 US postage stamp in honour of Gibbs. [ 6 ]
As well as being on display in two countries, Maxwell's model lives on in the literature of thermodynamics, and books on the subject often mention it, [ 17 ] though not always with complete historical accuracy. For example, the thermodynamic surface represented by the sculpture is often reported to be that of water, [ 17 ] contrary to Maxwell's own statement. [ 2 ]
Maxwell's model was not the first plaster model of a thermodynamic surface: in 1871, even before Gibbs' papers, James Thomson had constructed a plaster pressure - volume - temperature plot, based on data for carbon dioxide collected by Thomas Andrews . [ 18 ]
Around 1900, the Dutch scientist Heike Kamerlingh Onnes , together with his student Johannes Petrus Kuenen and his assistant Zaalberg van Zelst, continued Maxwell's work by constructing their own plaster thermodynamic surface models. [ 19 ] These models were based on accurate experimental data obtained in their laboratory, and were accompanied by specialised tools for drawing the lines of equal pressure. [ 19 ] | https://en.wikipedia.org/wiki/Maxwell's_thermodynamic_surface |
A Maxwell bridge is a modification to a Wheatstone bridge used to measure an unknown inductance (usually of low Q value) in terms of calibrated resistance and inductance or resistance and capacitance . [ 1 ] When the calibrated components are a parallel resistor and capacitor, the bridge is known as a Maxwell bridge. It is named for James C. Maxwell , who first described it in 1873.
It uses the principle that the positive phase angle of an inductive impedance can be compensated by the negative phase angle of a capacitive impedance placed in the opposite arm when the circuit is at resonance; i.e., there is no potential difference across the detector (an AC voltmeter or ammeter ) and hence no current flowing through it. The unknown inductance then becomes known in terms of this capacitance.
With reference to the picture, in a typical application R 1 {\displaystyle R_{1}} and R 4 {\displaystyle R_{4}} are known fixed entities, and R 2 {\displaystyle R_{2}} and C 2 {\displaystyle C_{2}} are known variable entities. R 2 {\displaystyle R_{2}} and C 2 {\displaystyle C_{2}} are adjusted until the bridge is balanced.
R 3 {\displaystyle R_{3}} and L 3 {\displaystyle L_{3}} can then be calculated based on the values of the other components:
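The balance relations themselves are not reproduced in this text; as a hedged sketch, the standard Maxwell–Wien balance conditions (assuming the usual arm labelling, with the parallel R2–C2 combination in the arm opposite the unknown R3–L3 arm) give R3 = R1·R4/R2 and L3 = R1·R4·C2, for example:

```python
# Sketch of the standard Maxwell-Wien balance relations (arm labelling assumed).
def maxwell_bridge_unknowns(R1, R4, R2, C2):
    """Return (R3, L3) of the unknown inductor from the balanced bridge values."""
    R3 = R1 * R4 / R2
    L3 = R1 * R4 * C2
    return R3, L3

# Example: R1 = R4 = 1 kOhm, balanced with R2 = 10 kOhm and C2 = 100 nF.
R3, L3 = maxwell_bridge_unknowns(1e3, 1e3, 10e3, 100e-9)
print(f"R3 = {R3:.1f} ohm, L3 = {L3*1e3:.1f} mH")   # 100 ohm, 100 mH
```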
To avoid the difficulties associated with determining the precise value of a variable capacitance, sometimes a fixed-value capacitor will be installed and more than one resistor will be made variable. The bridge cannot be used for the measurement of high Q values , and it is also unsuited to coils with Q values below one because of a balance convergence problem. Its use is therefore limited to the measurement of low Q values, roughly from 1 to 10.
The frequency of the AC current used to assess the unknown inductor should match the frequency of the circuit the inductor will be used in, because the impedance, and therefore the assigned inductance of the component, varies with frequency. For ideal inductors this relationship is linear, so that the inductance value at an arbitrary frequency can be calculated from the inductance value measured at some reference frequency. Unfortunately, for real components, this relationship is not linear, and using a derived or calculated value in place of a measured one can lead to serious inaccuracies.
A practical issue in construction of the bridge is mutual inductance: two inductors in propinquity will give rise to mutual induction; when the magnetic field of one intersects the coil of the other, it will reinforce the magnetic field in that other coil, and vice versa, distorting the inductance of both coils. To minimize mutual inductance, orient the inductors with their axes perpendicular to each other, and separate them as far as is practical. Similarly, the nearby presence of electric motors, chokes and transformers (like that in the power supply for the bridge!) may induce mutual inductance in the circuit components, so locate the circuit remotely from any of these.
The frequency dependence of inductance values gives rise to other constraints on this type of bridge: the calibration frequency F r must be well below the lesser of the self-resonance frequency of the inductor (L srf ) and the self-resonance frequency of the capacitor (C srf ), i.e. F r < min(L srf , C srf )/10. Before those limits are approached, the ESR of the capacitor will likely have a significant effect and will have to be explicitly modeled.
For ferromagnetic core inductors, there are additional constraints. There is a minimum magnetization current required to magnetize the core of an inductor, so the current in the inductor branches of the circuit must exceed that minimum, but must not be so great as to saturate the core of either inductor.
The additional complexity of using a Maxwell-Wien bridge over simpler bridge types is warranted in circumstances where either the mutual inductance between the load and the known bridge entities, or stray electromagnetic interference, distorts the measurement results. The capacitive reactance in the bridge will exactly oppose the inductive reactance of the load when the bridge is balanced, allowing the load's resistance and reactance to be reliably determined. | https://en.wikipedia.org/wiki/Maxwell_bridge |
In statistical physics and thermodynamics , the Maxwell construction is a method for addressing the physically unrealistic aspects of certain models of phase transitions . Named for physicist James Clerk Maxwell , it considers areas of regions on phase diagrams .
In thermodynamic equilibrium , a necessary condition for stability is that pressure, p {\displaystyle p} , does not increase with volume, or molar volume, v = V / N {\displaystyle v=V/N} ; this is expressed mathematically as ∂ v p | T < 0 {\displaystyle \partial _{v}p|_{T}<0} , where T {\displaystyle T} is the temperature. [ 1 ] This basic stability requirement, and similar ones for other conjugate pairs of variables , is violated in analytic models of first order phase transitions . The most famous case is the van der Waals equation , [ 2 ] [ 3 ] p = R T / ( v − b ) − a / v 2 {\displaystyle p=RT/(v-b)-a/v^{2}} where a , b , R {\displaystyle a,b,R} are dimensional constants. This violation is not a defect, rather it is the origin of the observed discontinuity in properties that distinguish liquid from vapor, and defines a first order phase transition.
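A small numerical sketch of this instability (the constants a, b and the use of R = 1 are assumptions of the example, not values from the text): evaluate a sub-critical van der Waals isotherm and locate the branch where ∂p/∂v > 0:

```python
import numpy as np

# Locate the mechanically unstable branch (dp/dv > 0) of a sub-critical
# van der Waals isotherm. Illustrative constants in reduced-style units.
R, a, b = 1.0, 1.0, 1.0 / 8.0
Tc = 8 * a / (27 * R * b)
T = 0.9 * Tc                            # sub-critical isotherm

v = np.linspace(1.5 * b, 30 * b, 20000)
p = R * T / (v - b) - a / v**2
dpdv = np.gradient(p, v)

unstable = v[dpdv > 0]
print(f"unstable branch roughly for {unstable.min():.3f} < v < {unstable.max():.3f}")
```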
Figure 1 shows an isotherm drawn, for v > b {\displaystyle v>b} , as a continuously differentiable solid black, dotted black, and dashed gray curve. The decreasing part of the curve to the right of point C in Fig. 1 describes a gas, while the decreasing part to the left of point E describes a liquid. These two parts are separated by a region between the local minimum and local maximum on the curve with positive slope that violates the stability criterion. This mathematical criterion expresses a physical condition which Epstein [ 4 ] described as follows: "It is obvious that this middle part, dotted in our curves [dashed in Fig.1 here], can have no physical reality. In fact, let us imagine the fluid in a state corresponding to this part of the curve contained in a heat conducting vertical cylinder whose top is formed by a piston. The piston can slide up and down in the cylinder, and we put on it a load exactly balancing the pressure of the gas. If we take a little weight off the piston, there will no longer be equilibrium and it will begin to move upward. However, as it moves the volume of the gas increases and with it its pressure. The resultant force on the piston gets larger, retaining its upward direction. The piston will, therefore, continue to move and the gas to expand until it reaches the state represented by the maximum of the isotherm. Vice versa, if we add ever so little to the load of the balanced piston, the gas will collapse to the state corresponding to the minimum of the isotherm."
This situation is similar to a body exactly balanced at the top of a smooth surface that, with the slightest disturbance will depart from its equilibrium position and continue until it reaches a local minimum. As they are described such states are dynamically unstable, and consequently they are not observed. The gap v m i n ≤ v ≤ v m a x {\displaystyle v_{\rm {min}}\leq v\leq v_{\rm {max}}} is a precursor of the actual phase change from liquid to vapor. The points E ( p m i n , v m i n ) {\displaystyle (p_{\rm {min}},v_{\rm {min}})} and C ( p m a x , v m a x ) {\displaystyle (p_{\rm {max}},v_{\rm {max}})} , where ∂ v p | T = 0 {\displaystyle \partial _{v}p|_{T}=0} , that delimit the largest possible liquid and smallest possible vapor states are called spinodal points. Their locus forms a spinodal curve which bounds a region where no homogeneous stable states can exist.
Experiments show that if the volume of a vessel containing a fixed amount of liquid is heated and expands at constant temperature, at a certain pressure, p s ( T ) {\displaystyle p_{s}(T)} , vapor, (denoted by dots at points f {\displaystyle f} and g {\displaystyle g} in Fig. 1) bubbles nucleate so the fluid is no longer homogeneous, but rather it has become a heterogeneous mixture of boiling liquid and condensing vapor. Gravity separates the boiling (saturated) liquid, v f = V f / N f {\displaystyle v_{f}=V_{f}/N_{f}} , from the less dense condensing (saturated) vapor, v g = V g / N g > v f , {\displaystyle v_{\text{g}}=V_{\text{g}}/N_{\text{g}}>v_{f},} that coexist at the same saturation temperature and pressure. As the heating continues the amount of vapor, N g {\displaystyle N_{\text{g}}} increases and that of the liquid, N f = N − N g {\displaystyle N_{f}=N-N_{\text{g}}} , decreases. All the while the pressure, p s {\displaystyle p_{s}} and temperature, T {\displaystyle T} , remain constant and the volume V = V f + V g {\displaystyle V=V_{f}+V_{\text{g}}} increases. In this situation the molar volume of the mixture is a weighted average of its components v = V / N = ( V f / N f ) ( N − N g ) / N + ( V g / N g ) ( N g / N ) = v f ( 1 − x ) + v g x {\displaystyle v=V/N=(V_{f}/N_{f})(N-N_{\text{g}})/N+(V_{\text{g}}/N_{\text{g}})(N_{\text{g}}/N)=v_{f}(1-x)+v_{\text{g}}x} where x = N g / N {\displaystyle x=N_{\text{g}}/N} , the mole fraction of the vapor, 0 ≤ x ≤ 1 {\displaystyle 0\leq x\leq 1} , increases continuously; however, the molar volume of the substance itself has only the largest possible stable value for its liquid state, and smallest possible stable value for its vapor state at the given p ( T ) {\displaystyle p(T)} . To repeat, although the mixture molar volume passes continuously from v f {\displaystyle v_{f}} to v g {\displaystyle v_{\text{g}}} (denoted by the dashed line in Fig. 1), the underlying fluid has a discontinuity in this property, and others as well. This equation of state of the mixture is called the lever rule . [ 5 ] [ 6 ] [ 7 ]
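A one-line helper makes the lever rule concrete; the numbers in the example are purely illustrative:

```python
# Lever rule sketch: vapor mole fraction x of a liquid-vapor mixture whose overall
# molar volume is v, given the saturated molar volumes v_f and v_g.
def vapor_quality(v, v_f, v_g):
    return (v - v_f) / (v_g - v_f)

print(vapor_quality(v=0.5, v_f=0.1, v_g=1.7))   # -> 0.25
```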
The dotted parts of the curve in Fig. 1 are metastable states . For many years such states were an academic curiosity; Callen [ 8 ] gave as an example, "water that has been cooled below 0°C at a pressure of 1 atm. A tap on a beaker of water in this condition precipitates a sudden dramatic crystallization of the system." However, studies of boiling heat transfer have made clear that metastable states occur routinely as an integral part of this process. In it the heating surface temperature is higher than the saturation temperature, often significantly so, hence the adjacent liquid must be superheated. [ 9 ] Further the advent of devices that operate with very high heat fluxes has created interest in the metastable states, and the thermodynamic properties associated with them, in particular the superheated liquid states. [ 10 ] Moreover, the fact that they are predicted by the van der Waals equation, and cubic equations in general, is compelling evidence of its efficacy in describing phase transitions; Sommerfeld described this as follows: [ 11 ]
It is very remarkable that the theory due to van der Waals is in a position to predict, at least qualitatively, the existence of the unstable [called metastable here] states along the branches AA ′ or BB ′ [BC and FE in Fig. 1 here].
The discontinuity in v {\displaystyle v} , and other properties, e.g. internal energy, u {\displaystyle u} , and entropy, s {\displaystyle s} , of the substance, is called a first order phase transition. [ 12 ] [ 13 ] In order to specify the unique experimentally observed pressure, p s ( T ) {\displaystyle p_{s}(T)} , at which it occurs another thermodynamic condition is required, for from Fig.1 it could clearly occur for any pressure in the range p m i n ≤ p ≤ p m a x {\displaystyle p_{\rm {min}}\leq p\leq p_{\rm {max}}} . Such a condition was first enunciated in a clever thermodynamic argument by Maxwell at a lecture he delivered to the British Chemical Society on Feb 18, 1875 [ 14 ] (Fig.1, including the letters B C D E F, is the curve he described):
The portion of the curve from C to E represents points which are essentially unstable, and which cannot therefore be realized.
Now let us suppose the medium to pass from B to F along the hypothetical curve B C D E F in a state always homogeneous, and to return along the straight line path F B in the form of a mixture of liquid and vapor. Since the temperature has been constant throughout, no heat can have been transformed into work. Now the heat transformed into work is represented by the excess of the area F D E over B C D . Hence the condition which determines the maximum pressure of the vapor at given temperature is that the line B F cuts off equal areas from the curve above and below.
On a temperature—molar entropy plane, the area under any curve is the heat transfer to the substance per mole, positive going from left to right and negative from right to left; moreover, in a cyclic process the net heat transfer to the substance is the area enclosed by the cycle's closed curve. [ 15 ] [ 16 ] Since the cycle Maxwell considered is composed of the two gray dashed isothermals at the same temperature, one proceeding from B to F (through C D and E), and the other directly back from F to B, the two lines are identical, just traversed in reverse; there is zero area enclosed, and hence q = 0 {\displaystyle q=0} .
Furthermore, the areas under these curves when plotted on a pressure—molar volume plane, see Fig. 1, are the work done by the substance, positive going from left to right, and negative from right to left. Likewise the net work done in a cycle is the area enclosed by the closed curve. Since the first law of thermodynamics yields, in the special case of a cycle, w = q {\displaystyle w=q} , for the cycle envisioned by Maxwell w = q = 0 {\displaystyle w=q=0} ; then since the area enclosed is I + II = 0, see Fig. 1, with I positive and II negative, the transition pressure must be such that the two areas are equal.
Written as a mathematical equation in terms of the work done in each process this is ∫ v g v f p d v + ∫ v f v g p s d v = − ∫ v f v g p d v + p s ( v g − v f ) = 0 for T = constant {\displaystyle \int _{v_{\text{g}}}^{v_{f}}\,p\,dv+\int _{v_{f}}^{v_{\text{g}}}\,p_{s}\,dv=-\int _{v_{f}}^{v_{\text{g}}}\,p\,dv+p_{s}(v_{\text{g}}-v_{f})=0\quad {\mbox{for}}\quad T={\mbox{constant}}}
This equation together with the equation of state written for each of the states f {\displaystyle f} and g {\displaystyle g} p s = p ( v f , T ) p s = p ( v g , T ) {\displaystyle p_{s}=p(v_{f},T)\qquad p_{s}=p(v_{\text{g}},T)} are three equations for the four variables, p s , T , v f , v g {\displaystyle p_{s},T,v_{f},v_{\text{g}}} , so given any one of them, say T {\displaystyle T} , the other three are determined. In other words, there is a unique value of p s ( T ) {\displaystyle p_{s}(T)} , as well as v f ( T ) {\displaystyle v_{f}(T)} and v g ( T ) {\displaystyle v_{\text{g}}(T)} , at which the phase transition can occur.
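These three equations can be solved numerically for a chosen temperature. The sketch below does exactly that with SciPy; the constants, the starting guesses, and the analytic form used for the ∫p dv integral of the van der Waals equation are assumptions of the example:

```python
import numpy as np
from scipy.optimize import fsolve

# Solve the two van der Waals equations of state at v_f and v_g together with the
# Maxwell equal-area rule (the integral of p dv is done analytically for vdW).
R, a, b = 1.0, 1.0, 1.0 / 8.0            # illustrative constants, R = 1 assumed
Tc = 8 * a / (27 * R * b)

def p_vdw(v, T):
    return R * T / (v - b) - a / v**2

def saturation_system(x, T):
    ps, vf, vg = x
    eq1 = ps - p_vdw(vf, T)
    eq2 = ps - p_vdw(vg, T)
    area = R * T * np.log((vg - b) / (vf - b)) + a * (1 / vg - 1 / vf)
    eq3 = area - ps * (vg - vf)           # Maxwell equal-area condition
    return [eq1, eq2, eq3]

T = 0.9 * Tc
ps, vf, vg = fsolve(saturation_system,
                    x0=[0.6 * a / (27 * b**2), 1.6 * b, 8 * b], args=(T,))
print(f"T/Tc = 0.9:  p_s = {ps:.4f}, v_f = {vf:.4f}, v_g = {vg:.4f}")
```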
At the end of his lecture, after complimenting van der Waals by referring to his work as "an exceedingly ingenious thesis", Maxwell finished it by saying:
I must not, however, omit to mention a most important American contribution to this part of thermodynamics by Prof. Willard Gibbs of Yale College U.S., who has given us a remarkably simple and thoroughly satisfactory method of representing the relations of the different states of matter by means of a model. By means of this model, problems which had long resisted the efforts of myself and others may be solved at once.
This remark proved prescient because in 1876-1878 Gibbs published his definitive work on thermodynamics [ 17 ] in which he showed that thermodynamic equilibrium of a heterogeneous substance requires that, in addition to mechanical equilibrium (the same pressure for each component) and thermal equilibrium (the same temperature for each component), there must also be material equilibrium (the same chemical potential for each component). In the present instance of one substance and two phases in addition to p f = p g = p s {\displaystyle p_{f}=p_{\text{g}}=p_{s}} and T f = T g = T {\displaystyle T_{f}=T_{\text{g}}=T} , material equilibrium requires g f = g g {\displaystyle g_{f}=g_{\text{g}}} (for the special case of one substance its chemical potential is the molar Gibbs function, μ = g ≡ G / N {\displaystyle \mu =g\equiv G/N} where g = u + p v − T s {\displaystyle g=u+pv-Ts} ). [ 18 ] This condition can be deduced by a simple physical argument as follows: the energy required to vaporize a mole is from the second law at constant temperature q v a p = T ( s g − s f ) {\displaystyle q_{\rm {vap}}=T(s_{\text{g}}-s_{f})} , and from the first law at constant pressure q v a p = h g − h f {\displaystyle q_{\rm {vap}}=h_{\text{g}}-h_{f}} , then equating these two and rearranging produces the result since h = u + p v {\displaystyle h=u+pv} . The conditions of material equilibrium lead to the famous Gibbs phase rule , D = n − r + 2 {\displaystyle D=n-r+2} , where n {\displaystyle n} is the number of substances, r {\displaystyle r} the number of phases, and D {\displaystyle D} the number of independent intensive variables required to specify the state. [ 19 ] [ 20 ] In the case of one substance and two phases discussed here this gives, D = 1 {\displaystyle D=1} , the experimentally observed number.
Now g ( p , T ) {\displaystyle g(p,T)} is a thermodynamic potential function , its differential is [ 21 ] d g = ∂ p g | T d p + ∂ T g | p d T = v d p − s d T {\displaystyle dg=\partial _{p}g|_{T}\,dp+\partial _{T}g|_{p}\,dT=v\,dp-s\,dT}
Integrating this at constant temperature produces g ( p , T ) = g A ( T ) + ∫ p A p v ( p ¯ , T ) d p ¯ {\displaystyle g(p,T)=g_{A}(T)+\int _{p_{A}}^{p}\,v({\bar {p}},T)\,d{\bar {p}}} where g A {\displaystyle g_{A}} is a constant of integration, but the constant is different for each isotherm hence it is written as a function of T {\displaystyle T} . [ 22 ] In order to evaluate g {\displaystyle g} one must invert p = p ( v , T ) {\displaystyle p=p(v,T)} to obtain v = v ( p , T ) {\displaystyle v=v(p,T)} . However, it is the nature of the phase transition phenomenon that this inversion is not unique; for example the van der Waals equation written for v {\displaystyle v} is p v 3 − ( p b + R T ) v 2 + a v − a b = 0 , {\displaystyle pv^{3}-(pb+RT)v^{2}+av-ab=0,}
a cubic with either 1 or, in this case, 3 real roots. Thus there are three curves, as seen in Fig. 2, consisting of stable (shown solid black), metastable (shown dotted black), and unstable (shown dashed gray) states.
Actually, the figure was not produced by solving the cubic and integrating, rather g ( v , T ) {\displaystyle g(v,T)} was obtained from its definition by first obtaining u ( v , T ) {\displaystyle u(v,T)} and s ( v , T ) {\displaystyle s(v,T)} , which is easily done analytically for the van der Waals equation, and plotting it parametrically with p ( v , T ) {\displaystyle p(v,T)} , using v {\displaystyle v} as the parameter. Considering only its stable states g ( p , T ) {\displaystyle g(p,T)} is continuous with discontinuous partial derivatives , ∂ p g | T = v {\displaystyle \partial _{p}g|_{T}=v} and ∂ T g | p = − s {\displaystyle \partial _{T}g|_{p}=-s} , at the phase transition point. In the Ehrenfest classification , a first order phase transition refers to the discontinuity of the first partial derivatives of g {\displaystyle g} while a second order phase transition would involve discontinuities of the second partial derivatives. [ 23 ]
Evaluating the integral expression for g ( p , T ) {\displaystyle g(p,T)} given previously between the saturated liquid and vapor states and applying the Gibbs criterion of material equilibrium to this phase change process requires writing it as g g − g f = ∫ p s p m i n v l d p + ∫ p m i n p m a x v u d p + ∫ p m a x p s v v d p = 0 {\displaystyle g_{\text{g}}-g_{f}=\int _{p_{s}}^{p_{\rm {min}}}\,v_{l}\,dp+\int _{p_{\rm {min}}}^{p_{\rm {max}}}\,v_{u}\,dp+\int _{p_{\rm {max}}}^{p_{s}}\,v_{v}\,dp=0}
Here the integral has been split into three parts using the three real roots of the cubic corresponding to the liquid, v l {\displaystyle v_{l}} , unstable, v u {\displaystyle v_{u}} , and vapor, v v {\displaystyle v_{v}} , states respectively. These integrals can best be visualized by viewing Fig. 1 rotated 90 ∘ {\displaystyle 90^{\circ }} counterclockwise in the paper plane then 180 ∘ {\displaystyle 180^{\circ }} about the v {\displaystyle v} axis so that v {\displaystyle v} appears on the leftside ordinate of the curve as shown in the accompanying graph. In this view the function v ( p , T ) {\displaystyle v(p,T)} clearly is multi-valued; this is the reason it requires three real functions to describe its behavior between p m i n {\displaystyle p_{\rm {min}}} and p m a x {\displaystyle p_{\rm {max}}} . Now on splitting the middle integral into two g g − g f = ∫ p s p m i n v l d p + ∫ p m i n p s v u d p + ∫ p s p m a x v u d p + ∫ p m a x p s v v d p = 0 {\displaystyle g_{\text{g}}-g_{f}=\int _{p_{s}}^{p_{\rm {min}}}\,v_{l}\,dp+\int _{p_{\rm {min}}}^{p_{s}}\,v_{u}\,dp+\int _{p_{s}}^{p_{\rm {max}}}\,v_{u}\,dp+\int _{p_{\rm {max}}}^{p_{s}}\,v_{v}\,dp=0}
The first two integrals here are area I while the second two are the negative of area II. The two areas add to zero hence their magnitudes are equal according to this Gibbs criterion. This is again the equal area rule of Maxwell, the Maxwell construction, and it can also be shown analytically. Since d ( p v ) = p d v + v d p {\displaystyle d(pv)=pdv+vdp} , d g = v d p − s d T = − p d v + d ( p v ) − s d T . {\displaystyle dg=v\,dp-s\,dT=-p\,dv+d(pv)-s\,dT.}
Integrating this for constant temperature from state f {\displaystyle f} to g {\displaystyle g} with the Gibbs condition produces g g − g f = − ∫ v f v g p ( v , T ) d v + p s ( v g − v f ) = 0 {\displaystyle g_{\text{g}}-g_{f}=-\int _{v_{f}}^{v_{\text{g}}}\,p(v,T)\,dv+p_{s}(v_{\text{g}}-v_{f})=0} which is Maxwell's result. This equal area rule can also be derived by making use of the Helmholtz free energy. [ 24 ] In any event the Maxwell construction derives from the Gibbs condition of material equilibrium. However, even though g f = g g {\displaystyle g_{f}=g_{\text{g}}} is more fundamental it is more abstract than the equal area rule, which is understood geometrically.
Another method to determine the coexistence points is based on the Helmholtz potential minimum principle, which states that in a system in diathermal contact with a heat reservoir T = T R {\displaystyle T=T_{R}} , D F = 0 {\displaystyle DF=0} and D 2 F > 0 {\displaystyle D^{2}F>0} , namely at equilibrium the Helmholtz potential is a minimum. [ 25 ] Since, like g ( p , T ) {\displaystyle g(p,T)} , the molar Helmholtz function f ( v , T ) {\displaystyle f(v,T)} is also a potential function whose differential is, [ 26 ] d f = ∂ v f | T d v + ∂ T f | v d T = − p d v − s d T , {\displaystyle df=\partial _{v}f|_{T}\,dv+\partial _{T}f|_{v}\,dT=-p\,dv-s\,dT,}
this minimum principle leads to the stability condition ∂ 2 f / ∂ v 2 | T = − ∂ p / ∂ v | T > 0 {\displaystyle \partial ^{2}f/\partial v^{2}|_{T}=-\partial p/\partial v|_{T}>0} . [ 27 ] This condition requires that at any stable state of the system the function f {\displaystyle f} is strictly convex , namely that in its vicinity the curve lies on or above its tangent. [ 28 ] Moreover, for those states the previous stability condition for the pressure is necessarily satisfied as well.
A plot of this function for the same subcritical isotherm of the vdW equation as Figs. 1 and 2 is shown in Fig. 3. Included in this figure is the (dashed/solid) straight line that has a double (common) tangent with the curve of the function f {\displaystyle f} at B and F. This straight line is, f = f 0 + ∂ v f T v = f 0 − p v {\displaystyle f=f_{0}+\partial _{v}f_{T}v=f_{0}-pv} , with p {\displaystyle p} constant, which can be written as f 0 = f + p v = g {\displaystyle f_{0}=f+pv=g} . The last equality follows from the relation f = u − T s {\displaystyle f=u-Ts} , [ 29 ] together with g = u − T s + p v {\displaystyle g=u-Ts+pv} . [ 18 ] All this means that every point on the line has the same values of g , p , T {\displaystyle g,p,T} , in particular the points B and F, which produces the Gibbs condition for material equilibrium g f = g g {\displaystyle g_{f}=g_{\text{g}}} as well as equality of temperature and pressure. [ 30 ] Therefore, this construction is equivalent to both the Gibbs conditions and the Maxwell construction.
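The double-tangent construction can also be carried out numerically. In the sketch below the van der Waals molar Helmholtz function is taken as f(v,T) = −RT ln(v−b) − a/v (a temperature-only additive term is dropped, which does not affect tangency), the two tangency conditions are solved, and the common slope reproduces −p_s. The constants and starting guesses are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import fsolve

# Double (common) tangent on the van der Waals Helmholtz function f(v, T).
R, a, b = 1.0, 1.0, 1.0 / 8.0
T = 0.9 * (8 * a / (27 * R * b))

f  = lambda v: -R * T * np.log(v - b) - a / v
df = lambda v: -R * T / (v - b) + a / v**2        # equals -p(v, T)

def tangent_conditions(x):
    v1, v2 = x
    return [df(v1) - df(v2),                      # equal slopes
            f(v2) - f(v1) - df(v1) * (v2 - v1)]   # chord equals tangent

v1, v2 = fsolve(tangent_conditions, x0=[1.6 * b, 8 * b])
print(f"v_f ~ {v1:.4f}, v_g ~ {v2:.4f}, p_s ~ {-df(v1):.4f}")
```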
This construction, based on f ( v , T ) {\displaystyle f(v,T)} defined earlier by Gibbs, [ 31 ] [ 32 ] was originally used by van der Waals (he called it both a double and common tangent), [ 33 ] because it could be easily extended to include binary fluid mixtures for which an isotherm of f ( v , x , T ) {\displaystyle f(v,x,T)} , with x = N 1 / ( N 1 + N 2 ) {\displaystyle x=N_{1}/(N_{1}+N_{2})} a composition variable, forms a surface that can have a common tangent plane. It has subsequently become a popular way to treat phase change problems in mixtures. [ 34 ] [ 35 ] [ 36 ]
From the van der Waals equation applied to the saturated liquid, v f < v m i n {\displaystyle v_{f}<v_{\rm {min}}} , and vapor, v g > v m a x {\displaystyle v_{\text{g}}>v_{\rm {max}}} , states p s = R T s v f − b − a v f 2 p s = R T s v g − b − a v g 2 {\displaystyle p_{s}={\frac {RT_{s}}{v_{f}-b}}-{\frac {a}{v_{f}^{2}}}\qquad p_{s}={\frac {RT_{s}}{v_{\text{g}}-b}}-{\frac {a}{{v_{\text{g}}}^{2}}}}
These two equations specify 4 variables so they can be solved for p s , T s {\displaystyle p_{s},T_{s}} in terms of v f , v g {\displaystyle v_{f},v_{\text{g}}} . This results in p s = p ∗ v ∗ 2 [ v f v g − v ∗ ( v f + v g ) ] v f 2 v g 2 T s = T ∗ v ∗ ( v f + v g ) ( v f − v ∗ ) ( v g − v ∗ ) v f 2 v g 2 {\displaystyle p_{s}=p^{*}{\frac {v^{*2}[v_{f}v_{\text{g}}-v^{*}(v_{f}+v_{\text{g}})]}{v_{f}^{2}{v_{\text{g}}}^{2}}}\qquad T_{s}=T^{*}{\frac {v^{*}(v_{f}+v_{\text{g}})(v_{f}-v^{*})(v_{\text{g}}-v^{*})}{v_{f}^{2}{v_{\text{g}}}^{2}}}} where p ∗ = a / b 2 {\displaystyle p^{*}=a/b^{2}} , v ∗ = b {\displaystyle v^{*}=b} , and T ∗ = a / ( R b ) {\displaystyle T^{*}=a/(Rb)} are a characteristic pressure, molar volume, and temperature defined by the constants (note that p ∗ v ∗ / T ∗ = R {\displaystyle p^{*}v^{*}/T^{*}=R} ). Applying the Maxwell construction to the van der Waals equation gives − T s ln ( v g − v ∗ v f − v ∗ ) + T ∗ v ∗ ( v g − v f ) v f v g + p s ( v g − v f ) / R = 0 {\displaystyle -T_{s}\ln \left({\frac {v_{\text{g}}-v^{*}}{v_{f}-v^{*}}}\right)+T^{*}{\frac {v^{*}(v_{\text{g}}-v_{f})}{v_{f}v_{\text{g}}}}+p_{s}(v_{\text{g}}-v_{f})/R=0}
These three equations can be solved numerically. This has been done given a value for either T s {\displaystyle T_{s}} or p s {\displaystyle p_{s}} , and tabular results presented; [ 37 ] [ 38 ] however, the equations also admit an analytic parametric solution that, according to Lenkner, [ 39 ] was obtained by Gibbs. Lenkner himself devised a simple, elegant, method to obtain this solution, by eliminating p s {\displaystyle p_{s}} and T s {\displaystyle T_{s}} from the equations, and writing them in terms of a stretched dimensionless density, ϱ = v ∗ / ( v − v ∗ ) {\displaystyle \varrho =v^{*}/(v-v^{*})} , that varies between ∞ {\displaystyle \infty } and 0 as v {\displaystyle v} varies from v ∗ {\displaystyle v^{*}} to ∞ {\displaystyle \infty } ; this produces ln ( ϱ f ϱ g ) = ( ϱ f − ϱ g ) ( ϱ g + ϱ f + 2 ) ϱ f + ϱ g + 2 ϱ f ϱ g {\displaystyle \ln \left({\frac {\varrho _{f}}{\varrho _{\text{g}}}}\right)={\frac {(\varrho _{f}-\varrho _{\text{g}})(\varrho _{\text{g}}+\varrho _{f}+2)}{\varrho _{f}+\varrho _{\text{g}}+2\varrho _{f}\varrho _{\text{g}}}}}
Although transcendental, this equation has a simple analytic, parametric solution obtained by writing the left side of the equation, which is just ( s g − s f ) / R {\displaystyle (s_{\text{g}}-s_{f})/R} , as ln ( ϱ f ϱ g ) = s g − s f R = δ = 2 y {\displaystyle \ln \left({\frac {\varrho _{f}}{\varrho _{\text{g}}}}\right)={\frac {s_{\text{g}}-s_{f}}{R}}=\delta =2y}
Then ϱ f = e δ ϱ g {\displaystyle \varrho _{f}=e^{\delta }\varrho _{\text{g}}} and when used to eliminate ϱ f {\displaystyle \varrho _{f}} from the right hand side a linear equation for ϱ g {\displaystyle \varrho _{\text{g}}} is obtained, whose solution is ϱ g ( y ) = f ( y ) e − y and ϱ f ( y ) = f ( y ) e y where f ( y ) = y cosh y − sinh y sinh y cosh y − y {\displaystyle \varrho _{\text{g}}(y)=f(y)e^{-y}\quad {\mbox{and}}\quad \varrho _{f}(y)=f(y)e^{y}\quad {\mbox{where}}\quad f(y)={\frac {y\cosh y-\sinh y}{\sinh y\cosh y-y}}}
Accordingly, the fundamental variable that specifies all the others in this phase transition process is ( s g − s f ) / ( 2 R ) = y {\displaystyle (s_{\text{g}}-s_{f})/(2R)=y} . This solution to the saturation problem is easily extended to encompass all its variables T s ( y ) = T ∗ 2 f ( y ) [ cosh y + f ( y ) ] g ( y ) 2 p s ( y ) = p ∗ f ( y ) 2 [ 1 − f ( y ) 2 ] g ( y ) 2 {\displaystyle T_{s}(y)=T^{*}{\frac {2f(y)[\cosh y+f(y)]}{g(y)^{2}}}\qquad p_{s}(y)=p^{*}{\frac {f(y)^{2}[1-f(y)^{2}]}{g(y)^{2}}}} v f ( y ) = v ∗ 1 + f ( y ) e y f ( y ) e y v g ( y ) = v ∗ 1 + f ( y ) e − y f ( y ) e − y {\displaystyle v_{f}(y)=v^{*}{\frac {1+f(y)e^{y}}{f(y)e^{y}}}\quad \quad \quad \quad \quad \quad v_{g}(y)=v^{*}{\frac {1+f(y)e^{-y}}{f(y)e^{-y}}}} where g ( y ) = [ 1 + ϱ g ( y ) ] [ 1 + ϱ f ( y ) ] = 1 + 2 f ( y ) cosh y + f ( y ) 2 {\displaystyle g(y)=[1+\varrho _{\text{g}}(y)][1+\varrho _{f}(y)]=1+2f(y)\cosh y+f(y)^{2}} .
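The parametric solution can be evaluated directly. The sketch below (with illustrative constants and R = 1, which are assumptions of the example) computes one coexistence point from a chosen value of y and checks that both saturated volumes lie on the same van der Waals isotherm at the computed p_s and T_s:

```python
import numpy as np

R, a, b = 1.0, 1.0, 1.0 / 8.0
p_star, v_star, T_star = a / b**2, b, a / (R * b)

def coexistence(y):
    fy = (y * np.cosh(y) - np.sinh(y)) / (np.sinh(y) * np.cosh(y) - y)
    rho_g, rho_f = fy * np.exp(-y), fy * np.exp(y)
    gy = 1 + 2 * fy * np.cosh(y) + fy**2
    Ts = T_star * 2 * fy * (np.cosh(y) + fy) / gy**2
    ps = p_star * fy**2 * (1 - fy**2) / gy**2
    vf, vg = v_star * (1 + rho_f) / rho_f, v_star * (1 + rho_g) / rho_g
    return Ts, ps, vf, vg

Ts, ps, vf, vg = coexistence(1.5)
# Consistency check: both saturated states must lie on the same vdW isotherm.
p_check = lambda v: R * Ts / (v - b) - a / v**2
print(Ts, ps, p_check(vf), p_check(vg))   # the last three values agree
```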
The values of all other property discontinuities across the saturation curve also follow from this solution. [ 40 ]
These functions define the coexistence curve which is the locus of the saturated liquid and saturated vapor states of the van der Waals fluid. In Fig. 4 this curve is plotted in blue together with the spinodal curve in black, calculated from T = T ∗ ( v ~ − 1 ) 2 / v ~ 3 p = p ∗ ( v ~ − 2 ) / v ~ 3 {\displaystyle T=T^{*}({\tilde {v}}-1)^{2}/{\tilde {v}}^{3}\quad \quad p=p^{*}({\tilde {v}}-2)/{\tilde {v}}^{3}} where v ~ = v / v ∗ {\displaystyle {\tilde {v}}=v/v^{*}} is a parameter. The variables used in making these plots are the reduced (dimensionless) variables, p r = p / p c {\displaystyle p_{r}=p/p_{c}} , v r = v / v c {\displaystyle v_{r}=v/v_{c}} , and T r = T / T c {\displaystyle T_{r}=T/T_{c}} where the c {\displaystyle c} subscripted quantities are the critical point values. They are defined by, ∂ p / ∂ v | T = 0 {\displaystyle \partial p/\partial v|_{T}=0} , and ∂ 2 p / ∂ v 2 | T = 0 {\displaystyle \partial ^{2}p/\partial _{v^{2}}|_{T}=0} at the critical point, [ 41 ] and are measurable quantities. The relations p ∗ / p c = 27 {\displaystyle p^{*}/p_{c}=27} , v ∗ / v c = 1 / 3 {\displaystyle v^{*}/v_{c}=1/3} , T ∗ / T c = 27 / 8 {\displaystyle T^{*}/T_{c}=27/8} are used to convert the star quantities in the solution to the c {\displaystyle c} quantities used in the figures. The curve agrees completely with the numerical results referenced earlier. In the region inside the spinodal curve there are two states at each point, one stable and one metastable, either superheated liquid to the right of the blue curve, or subcooled vapor to the left, while outside the spinodal curve there is one stable state at each point. In Fig. 5 the region under the (dot dash black) spinodal curve contains no homogeneous stable states while between the (dot dash red) coexistence and spinodal curves there is one metastable state at every point, and outside the coexistence curve there is one stable state at each point. The two blue and two green circles denote the saturated liquid and vapor states on their respective isotherms. There are also observed heterogeneous states everywhere under the coexistence curve that satisfy the lever rule; however, they are not homogeneous states of the van der Waals equation, so their existence, indicated by horizontal lines connecting the saturation points on each subcritical isotherm, is not displayed. Also the abscissa in this figure is logarithmic, not linear, in order to show more of the vapor region at large v r {\displaystyle v_{r}} without excessively compressing the liquid and unstable regions at small v r {\displaystyle v_{r}} ; however, this device distorts areas, so the two areas I and II in Fig.1 would not appear equal here.
Over the parameter range 0 ≤ y < ∞ {\displaystyle 0\leq y<\infty } , f ( y ) {\displaystyle f(y)} decreases monotonically from f ( 0 ) = 1 / 2 {\displaystyle f(0)=1/2} and approaches 0 as f ( y ) ∼ 2 ( y − 1 ) e − y {\displaystyle f(y)\sim 2(y-1)e^{-y}} in the limit y → ∞ {\displaystyle y\rightarrow \infty } . Therefore ϱ f ( 0 ) = ϱ g ( 0 ) = 1 / 2 {\displaystyle \varrho _{f}(0)=\varrho _{\text{g}}(0)=1/2} and in the limit y → ∞ {\displaystyle y\rightarrow \infty } , ϱ f ( y ) ∼ 2 ( y − 1 ) {\displaystyle \varrho _{f}(y)\sim 2(y-1)} and ϱ g ( y ) ∼ 2 ( y − 1 ) e − 2 y {\displaystyle \varrho _{\text{g}}(y)\sim 2(y-1)e^{-2y}} . The behavior of p s ( y ) {\displaystyle p_{s}(y)} and T s ( y ) {\displaystyle T_{s}(y)} follows from the equations. Both these properties also decrease monotonically from p s ( 0 ) = p ∗ / 27 {\displaystyle p_{s}(0)=p^{*}/27} and T s ( 0 ) = 8 T ∗ / 27 {\displaystyle T_{s}(0)=8T^{*}/27} , and approach 0 as p s ∼ p ∗ ϱ g / ϱ f {\displaystyle p_{s}\sim p^{*}\varrho _{\text{g}}/\varrho _{f}} and T s ∼ T ∗ / ϱ f {\displaystyle T_{s}\sim T^{*}/\varrho _{f}} in the limit y → ∞ {\displaystyle y\rightarrow \infty } . Note from these that p s ∼ ( p ∗ / T ∗ ) T s ϱ g ∼ ( p ∗ v ∗ / T ∗ ) T s ρ g = ρ g R T s {\displaystyle p_{s}\sim (p^{*}/T^{*})T_{s}\varrho _{\text{g}}\sim (p^{*}v^{*}/T^{*})T_{s}\rho _{\text{g}}=\rho _{\text{g}}RT_{s}} ; the van der Waals saturated vapor is an ideal gas in this limit. To paraphrase Sommerfeld, it is remarkable that the theory due to van der Waals is able to predict that when s g − s f ≫ 2 R {\displaystyle s_{\text{g}}-s_{f}\gg 2R} the saturated vapor behaves like an ideal gas; the saturated vapor of real gases behaves exactly this way.
In addition for T r < 27 / 32 = 0.84375 {\displaystyle T_{r}<27/32=0.84375} the liquid spinodal point occurs at a negative pressure, and the isotherm T r = 0.8 {\displaystyle T_{r}=0.8} is included in Fig. 4 to illustrate this point. This means that some part of those liquid metastable states are in tension, and the lower the temperature the greater the tensile stress. Although this seems counterintuitive it is known that under some circumstances liquids can support tension. Tien and Lienhard [ 42 ] noted this and wrote:
The van der Waals equation predicts that at low temperatures liquids sustain enormous tension – a fact that has led some authors to take the equation lightly. In recent years measurements have been made that reveal this to be entirely correct. [ 43 ] Liquids that are clean and free of dissolved gas can be subjected to tensions greater in magnitude than p c {\displaystyle p_{\text{c}}} .
This is another interesting feature of the van der Waals theory. | https://en.wikipedia.org/wiki/Maxwell_construction |
The Maxwell model is the simplest model of a viscoelastic material that shows the properties of a typical liquid. [ 1 ] It shows viscous flow on the long timescale, but additional elastic resistance to fast deformations. [ 2 ] It is named for James Clerk Maxwell, who proposed the model in 1867. [ 3 ] [ 4 ] It is also known as a Maxwell fluid. A generalization of the scalar relation to a tensor equation lacks motivation from more microscopic models and does not comply with the concept of material objectivity. However, these criteria are fulfilled by the Upper-convected Maxwell model .
The Maxwell model is represented by a purely viscous damper and a purely elastic spring connected in series, [ 5 ] as shown in the diagram. If, instead, we connect these two elements in parallel, [ 5 ] we get the generalized model of a solid Kelvin–Voigt material .
In the Maxwell configuration, under an applied axial stress, the total stress, σ T o t a l {\displaystyle \sigma _{\mathrm {Total} }} and the total strain, ε T o t a l {\displaystyle \varepsilon _{\mathrm {Total} }} can be defined as follows: [ 2 ] σ T o t a l = σ D = σ S , ε T o t a l = ε D + ε S , {\displaystyle \sigma _{\mathrm {Total} }=\sigma _{D}=\sigma _{S},\qquad \varepsilon _{\mathrm {Total} }=\varepsilon _{D}+\varepsilon _{S},}
where the subscript D indicates the stress–strain in the damper and the subscript S indicates the stress–strain in the spring. Taking the derivative of strain with respect to time, we obtain: d ε T o t a l d t = d ε D d t + d ε S d t = σ η + 1 E d σ d t , {\displaystyle {\frac {d\varepsilon _{\mathrm {Total} }}{dt}}={\frac {d\varepsilon _{D}}{dt}}+{\frac {d\varepsilon _{S}}{dt}}={\frac {\sigma }{\eta }}+{\frac {1}{E}}{\frac {d\sigma }{dt}},}
where E is the elastic modulus and η is the material coefficient of viscosity. This model describes the damper as a Newtonian fluid and models the spring with Hooke's law .
In a Maxwell material, stress σ , strain ε and their rates of change with respect to time t are governed by equations of the form: [ 2 ] 1 E d σ d t + σ η = d ε d t {\displaystyle {\frac {1}{E}}{\frac {d\sigma }{dt}}+{\frac {\sigma }{\eta }}={\frac {d\varepsilon }{dt}}}
or, in dot notation: ε ˙ = σ ˙ E + σ η {\displaystyle {\dot {\varepsilon }}={\frac {\dot {\sigma }}{E}}+{\frac {\sigma }{\eta }}}
The equation can be applied either to the shear stress or to the uniform tension in a material. In the former case, the viscosity corresponds to that for a Newtonian fluid . In the latter case, it has a slightly different meaning relating stress and rate of strain.
The model is usually applied to the case of small deformations. For large deformations, some geometric non-linearity must be included. For the simplest way of generalizing the Maxwell model, refer to the upper-convected Maxwell model .
If a Maxwell material is suddenly deformed and held to a strain of ε 0 {\displaystyle \varepsilon _{0}} , then the stress decays on a characteristic timescale of η E {\displaystyle {\frac {\eta }{E}}} , known as the relaxation time . The phenomenon is known as stress relaxation .
The picture shows dependence of dimensionless stress σ ( t ) E ε 0 {\displaystyle {\frac {\sigma (t)}{E\varepsilon _{0}}}} upon dimensionless time E η t {\displaystyle {\frac {E}{\eta }}t} :
If we free the material at time t 1 {\displaystyle t_{1}} , then the elastic element will spring back by the value of ε b a c k = σ ( t 1 ) E = ε 0 e − E t 1 / η {\displaystyle \varepsilon _{\mathrm {back} }={\frac {\sigma (t_{1})}{E}}=\varepsilon _{0}e^{-Et_{1}/\eta }}
Since the viscous element would not return to its original length, the irreversible component of deformation can be simplified to the expression below: ε i r r e v e r s i b l e = ε 0 ( 1 − e − E t 1 / η ) {\displaystyle \varepsilon _{\mathrm {irreversible} }=\varepsilon _{0}\left(1-e^{-Et_{1}/\eta }\right)}
If a Maxwell material is suddenly subjected to a stress σ 0 {\displaystyle \sigma _{0}} , then the elastic element would suddenly deform and the viscous element would deform with a constant rate: ε ( t ) = σ 0 E + σ 0 η t {\displaystyle \varepsilon (t)={\frac {\sigma _{0}}{E}}+{\frac {\sigma _{0}}{\eta }}t}
If at some time t 1 {\displaystyle t_{1}} we released the material, then the deformation of the elastic element would be the spring-back deformation and the deformation of the viscous element would not change: ε r e v e r s i b l e = σ 0 E , ε i r r e v e r s i b l e = σ 0 η t 1 {\displaystyle \varepsilon _{\mathrm {reversible} }={\frac {\sigma _{0}}{E}},\qquad \varepsilon _{\mathrm {irreversible} }={\frac {\sigma _{0}}{\eta }}t_{1}}
The Maxwell model does not exhibit creep since it models strain as a linear function of time.
If a small stress is applied for a sufficiently long time, then the irreversible strains become large. Thus, Maxwell material is a type of liquid.
If a Maxwell material is subject to a constant strain rate ϵ ˙ {\displaystyle {\dot {\epsilon }}} then the stress increases, reaching a constant value of
σ = η ε ˙ {\displaystyle \sigma =\eta {\dot {\varepsilon }}}
In general
σ ( t ) = η ε ˙ ( 1 − e − E t / η ) {\displaystyle \sigma (t)=\eta {\dot {\varepsilon }}(1-e^{-Et/\eta })}
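This closed-form result can be checked against a direct numerical integration of the governing equation σ̇ = Eε̇ − (E/η)σ; the material parameters and strain rate in the sketch are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the Maxwell-model equation at constant strain rate and compare with
# the closed form sigma(t) = eta * rate * (1 - exp(-E t / eta)).
E, eta, rate = 1.0e3, 5.0e3, 1.0e-3         # modulus, viscosity, strain rate
tau = eta / E                                # relaxation time

sol = solve_ivp(lambda t, s: E * rate - (E / eta) * s, (0.0, 5 * tau), [0.0],
                dense_output=True, rtol=1e-8)
t = np.linspace(0.0, 5 * tau, 6)
numeric = sol.sol(t)[0]
exact = eta * rate * (1.0 - np.exp(-E * t / eta))
print(np.max(np.abs(numeric - exact)))       # should be tiny
```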
The complex dynamic modulus of a Maxwell material would be: E ∗ ( ω ) = 1 1 / E + 1 / ( i ω η ) = E ( ω 2 τ 2 + i ω τ ) 1 + ω 2 τ 2 , τ = η / E {\displaystyle E^{*}(\omega )={\frac {1}{1/E+1/(i\omega \eta )}}={\frac {E(\omega ^{2}\tau ^{2}+i\omega \tau )}{1+\omega ^{2}\tau ^{2}}},\qquad \tau =\eta /E}
Thus, the components of the dynamic modulus are : E 1 ( ω ) = E ω 2 τ 2 1 + ω 2 τ 2 {\displaystyle E_{1}(\omega )={\frac {E\omega ^{2}\tau ^{2}}{1+\omega ^{2}\tau ^{2}}}}
and E 2 ( ω ) = E ω τ 1 + ω 2 τ 2 {\displaystyle E_{2}(\omega )={\frac {E\omega \tau }{1+\omega ^{2}\tau ^{2}}}}
The picture shows relaxational spectrum for Maxwell material. The relaxation time constant is τ ≡ η / E {\displaystyle \tau \equiv \eta /E} . | https://en.wikipedia.org/wiki/Maxwell_model |
The Maxwell–Bloch equations , also called the optical Bloch equations , [ 1 ] describe the dynamics of a two-state quantum system interacting with the electromagnetic mode of an optical resonator. They are analogous to (but not at all equivalent to) the Bloch equations which describe the motion of the nuclear magnetic moment in an electromagnetic field. The equations can be derived either semiclassically or with the field fully quantized when certain approximations are made.
The derivation of the semi-classical optical Bloch equations is nearly identical to solving the two-state quantum system (see the discussion there). However, usually one casts these equations into a density matrix form. The system we are dealing with can be described by the wave function:
The density matrix is
(other conventions are possible; this follows the derivation in Metcalf (1999)). [ 2 ] One can now solve the Heisenberg equation of motion, or translate the results from solving the Schrödinger equation into density matrix form. One arrives at the following equations, including spontaneous emission:
In the derivation of these formulae, we define ρ ¯ g e ≡ ρ g e e − i δ t {\displaystyle {\bar {\rho }}_{ge}\equiv \rho _{ge}e^{-i\delta t}} and ρ ¯ e g ≡ ρ e g e i δ t {\displaystyle {\bar {\rho }}_{eg}\equiv \rho _{eg}e^{i\delta t}} . It was also explicitly assumed that spontaneous emission is described by an exponential decay of the coefficient ρ e g ( t ) {\displaystyle \rho _{eg}(t)} with decay constant γ 2 {\displaystyle {\frac {\gamma }{2}}} . Ω {\displaystyle \Omega } is the Rabi frequency , which is
and δ = ω − ω 0 {\displaystyle \delta =\omega -\omega _{0}} is the detuning and measures how far the light frequency, ω {\displaystyle \omega } , is from the transition, ω 0 {\displaystyle \omega _{0}} . Here, d → g , e {\displaystyle {\vec {d}}_{g,e}} is the transition dipole moment for the g → e {\displaystyle g\rightarrow e} transition and E → 0 = ϵ ^ E 0 {\displaystyle {\vec {E}}_{0}={\hat {\epsilon }}E_{0}} is the vector electric field amplitude including the polarization (in the sense E → = E 0 → 2 e − i ω t + c . c . {\displaystyle {\vec {E}}={\frac {\vec {E_{0}}}{2}}e^{-i\omega t}+c.c.} ).
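For a concrete feel for these equations, the sketch below integrates one common form of the semiclassical optical Bloch equations for a driven, decaying two-level atom. Sign and phase conventions differ between references, and the specific form and all parameter values here are assumptions for illustration; the steady state is compared with the standard saturation formula ρ_ee = (Ω²/4)/(δ² + Ω²/2 + γ²/4).

```python
import numpy as np

gamma = 1.0    # spontaneous decay rate (sets the time unit; assumed)
Omega = 2.0    # Rabi frequency (assumed)
delta = 0.5    # detuning omega - omega_0 (assumed)

rho_ee = 0.0        # excited-state population
rho_eg = 0.0 + 0j   # slowly varying coherence; rho_ge is its complex conjugate

dt = 1e-4 / gamma
for _ in range(int(20.0 / (gamma * dt))):   # integrate for 20 lifetimes
    rho_gg = 1.0 - rho_ee
    d_ee = -gamma * rho_ee + (1j * Omega / 2) * (np.conj(rho_eg) - rho_eg)
    d_eg = -(gamma / 2 + 1j * delta) * rho_eg + (1j * Omega / 2) * (rho_gg - rho_ee)
    rho_ee += dt * d_ee.real
    rho_eg += dt * d_eg

steady = (Omega**2 / 4) / (delta**2 + Omega**2 / 2 + gamma**2 / 4)
print(f"integrated rho_ee : {rho_ee:.6f}")
print(f"analytic steady   : {steady:.6f}")
```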
Beginning with the Jaynes–Cummings Hamiltonian under coherent drive
where a {\displaystyle a} is the lowering operator for the cavity field, and σ = 1 2 ( σ x − i σ y ) {\displaystyle \sigma ={\frac {1}{2}}\left(\sigma _{x}-i\sigma _{y}\right)} is the atomic lowering operator written as a combination of Pauli matrices . The time dependence can be removed by transforming the wavefunction according to | ψ ⟩ → e − i ω l t ( a † a + σ † σ ) | ψ ⟩ {\displaystyle |\psi \rangle \rightarrow \operatorname {e} ^{-i\omega _{l}t\left(a^{\dagger }a+\sigma ^{\dagger }\sigma \right)}|\psi \rangle } , leading to a transformed Hamiltonian
where Δ i = ω i − ω l {\displaystyle \Delta _{i}=\omega _{i}-\omega _{l}} . As it stands now, the Hamiltonian has four terms. The first two are the self energy of the atom (or other two level system) and field. The third term is an energy conserving interaction term allowing the cavity and atom to exchange population and coherence. These three terms alone give rise to the Jaynes-Cummings ladder of dressed states, and the associated anharmonicity in the energy spectrum. The last term models coupling between the cavity mode and a classical field, i.e. a laser. The drive strength J {\displaystyle J} is given in terms of the power transmitted through the empty two-sided cavity as J = 2 P ( Δ c 2 + κ 2 ) / ( ω c κ ) {\displaystyle J={\sqrt {2P(\Delta _{c}^{2}+\kappa ^{2})/(\omega _{c}\kappa )}}} , where 2 κ {\displaystyle 2\kappa } is the cavity linewidth. This brings to light a crucial point concerning the role of dissipation in the operation of a laser or other CQED device; dissipation is the means by which the system (coupled atom/cavity) interacts with its environment. To this end, dissipation is included by framing the problem in terms of the master equation, where the last two terms are in the Lindblad form
The equations of motion for the expectation values of the operators can be derived from the master equation by the formulas ⟨ O ⟩ = tr ( O ρ ) {\displaystyle \langle O\rangle =\operatorname {tr} \left(O\rho \right)} and ⟨ O ˙ ⟩ = tr ( O ρ ˙ ) {\displaystyle \langle {\dot {O}}\rangle =\operatorname {tr} \left(O{\dot {\rho }}\right)} . The equations of motion for ⟨ a ⟩ {\displaystyle \langle a\rangle } , ⟨ σ ⟩ {\displaystyle \langle \sigma \rangle } , and ⟨ σ z ⟩ {\displaystyle \langle \sigma _{z}\rangle } , the cavity field, atomic coherence, and atomic inversion respectively, are
At this point, we have produced three of an infinite ladder of coupled equations. As can be seen from the third equation, higher order correlations are necessary. The differential equation for the time evolution of ⟨ a † σ ⟩ {\displaystyle \langle a^{\dagger }\sigma \rangle } will contain expectation values of higher order products of operators, thus leading to an infinite set of coupled equations. We heuristically make the approximation that the expectation value of a product of operators is equal to the product of expectation values of the individual operators. This is akin to assuming that the operators are uncorrelated, and is a good approximation in the classical limit. It turns out that the resulting equations give the correct qualitative behavior even in the single excitation regime. Additionally, to simplify the equations we make the following replacements
And the Maxwell–Bloch equations can be written in their final form
Within the dipole approximation and rotating-wave approximation , the dynamics of the atomic density matrix, when interacting with a laser field, is described by the optical Bloch equations, whose effect can be divided into two parts: [ 3 ] the optical dipole force and the scattering force. [ 4 ] | https://en.wikipedia.org/wiki/Maxwell–Bloch_equations |
In statistical mechanics , Maxwell–Boltzmann statistics describes the distribution of classical material particles over various energy states in thermal equilibrium . It is applicable when the temperature is high enough or the particle density is low enough to render quantum effects negligible.
The expected number of particles with energy ε i {\displaystyle \varepsilon _{i}} for Maxwell–Boltzmann statistics is
where:
Equivalently, the number of particles is sometimes expressed as
where the index i now specifies a particular state rather than the set of all states with energy ε i {\displaystyle \varepsilon _{i}} , and Z = ∑ i e − ε i / k T {\textstyle Z=\sum _{i}e^{-\varepsilon _{i}/kT}} .
Maxwell–Boltzmann statistics grew out of the Maxwell–Boltzmann distribution, most likely as a distillation of the underlying technique. The distribution was first derived by Maxwell in 1860 on heuristic grounds. Boltzmann later, in the 1870s, carried out significant investigations into the physical origins of this distribution. The distribution can be derived on the grounds that it maximizes the entropy of the system.
Maxwell–Boltzmann distribution and Maxwell–Boltzmann statistics are closely related. Maxwell–Boltzmann statistics is a more general principle in statistical mechanics that describes the probability of a classical particle being in a particular energy state:
where:
Maxwell–Boltzmann distribution is a specific application of Maxwell–Boltzmann statistics to the kinetic energies of gas particles. The distribution of velocities (or speeds) of particles in an ideal gas follows from the statistical assumption that the energy levels of a gas molecule are given by its kinetic energy:
where:
We can deduce the Maxwell–Boltzmann distribution from Maxwell–Boltzmann statistics, starting with the Maxwell-Boltzmann probability for energy states and substituting the kinetic energy E = 1 2 m v 2 {\displaystyle E={\frac {1}{2}}mv^{2}} to express the probability in terms of velocity:
In 3D, the number of velocity states with speed between v and v + dv is proportional to the surface area of a sphere of radius v, 4 π v 2 {\displaystyle 4\pi v^{2}} . Thus, the probability density function (PDF) for speed v {\displaystyle v} becomes:
To find the normalization constant C {\displaystyle C} , we require the integral of the probability density function over all possible speeds to be unity:
Evaluating the integral using the known result ∫ 0 ∞ v 2 e − a v 2 d v = π 4 a 3 / 2 {\displaystyle \int _{0}^{\infty }v^{2}e^{-av^{2}}dv={\frac {\sqrt {\pi }}{4a^{3/2}}}} , with a = m 2 k B T {\displaystyle a={\frac {m}{2k_{B}T}}} , we obtain:
Therefore, the Maxwell–Boltzmann speed distribution is:
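A brief numerical check of this result (the gas parameters are illustrative assumptions): evaluate the speed distribution on a grid, confirm it is normalized, and compare its mean speed with the known formula ⟨v⟩ = √(8kT/πm).

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
m = 6.6335209e-26      # mass of an argon atom, kg (illustrative choice)
T = 300.0              # temperature, K (assumed)

a = m / (2 * kB * T)

def f_speed(v):
    """Maxwell-Boltzmann speed PDF: 4*pi*(m/(2*pi*kB*T))**1.5 * v**2 * exp(-a*v**2)."""
    return 4 * np.pi * (m / (2 * np.pi * kB * T)) ** 1.5 * v**2 * np.exp(-a * v**2)

v = np.linspace(0.0, 3000.0, 200001)
dv = v[1] - v[0]
f = f_speed(v)

norm = np.sum(f) * dv          # should be close to 1
mean_v = np.sum(v * f) * dv    # should match sqrt(8 kB T / (pi m))

print(f"normalization        : {norm:.6f}")
print(f"mean speed (numeric) : {mean_v:.2f} m/s")
print(f"mean speed (formula) : {np.sqrt(8 * kB * T / (np.pi * m)):.2f} m/s")
```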
Maxwell–Boltzmann statistics is used to derive the Maxwell–Boltzmann distribution of an ideal gas. However, it can also be used to extend that distribution to particles with a different energy–momentum relation , such as relativistic particles (resulting in Maxwell–Jüttner distribution ), and to other than three-dimensional spaces.
Maxwell–Boltzmann statistics is often described as the statistics of "distinguishable" classical particles. In other words, the configuration of particle A in state 1 and particle B in state 2 is different from the case in which particle B is in state 1 and particle A is in state 2. This assumption leads to the proper (Boltzmann) statistics of particles in the energy states, but yields non-physical results for the entropy, as embodied in the Gibbs paradox .
At the same time, there are no real particles that have the characteristics required by Maxwell–Boltzmann statistics. Indeed, the Gibbs paradox is resolved if we treat all particles of a certain type (e.g., electrons, protons, photons, etc.) as indistinguishable in principle. Once this assumption is made, the particle statistics change. The entropy change in the entropy of mixing example may be viewed as an example of a non-extensive entropy resulting from the distinguishability of the two types of particles being mixed.
Quantum particles are either bosons (following Bose–Einstein statistics ) or fermions (subject to the Pauli exclusion principle , following instead Fermi–Dirac statistics ). Both of these quantum statistics approach the Maxwell–Boltzmann statistics in the limit of high temperature and low particle density.
Maxwell–Boltzmann statistics can be derived in various statistical mechanical thermodynamic ensembles: [ 1 ]
In each case it is necessary to assume that the particles are non-interacting, and that multiple particles can occupy the same state and do so independently.
Suppose we have a container with a huge number of very small particles all with identical physical characteristics (such as mass, charge, etc.). Let's refer to this as the system . Assume that though the particles have identical properties, they are distinguishable. For example, we might identify each particle by continually observing their trajectories, or by placing a marking on each one, e.g., drawing a different number on each one as is done with lottery balls.
The particles are moving inside that container in all directions with great speed. Because the particles are speeding around, they possess some energy. The Maxwell–Boltzmann distribution is a mathematical function that describes how many particles in the container have a certain energy. More precisely, the Maxwell–Boltzmann distribution gives the non-normalized probability (this means that the probabilities do not add up to 1) that the state corresponding to a particular energy is occupied.
In general, there may be many particles with the same amount of energy ε {\displaystyle \varepsilon } . Let the number of particles with the same energy ε 1 {\displaystyle \varepsilon _{1}} be N 1 {\displaystyle N_{1}} , the number of particles possessing another energy ε 2 {\displaystyle \varepsilon _{2}} be N 2 {\displaystyle N_{2}} , and so forth for all the possible energies { ε i ∣ i = 1 , 2 , 3 , … } . {\displaystyle \{\varepsilon _{i}\mid i=1,2,3,\ldots \}.} To describe this situation, we say that N i {\displaystyle N_{i}} is the occupation number of the energy level i . {\displaystyle i.} If we know all the occupation numbers { N i ∣ i = 1 , 2 , 3 , … } , {\displaystyle \{N_{i}\mid i=1,2,3,\ldots \},} then we know the total energy of the system. However, because we can distinguish between which particles are occupying each energy level, the set of occupation numbers { N i ∣ i = 1 , 2 , 3 , … } {\displaystyle \{N_{i}\mid i=1,2,3,\ldots \}} does not completely describe the state of the system. To completely describe the state of the system, or the microstate , we must specify exactly which particles are in each energy level. Thus when we count the number of possible states of the system, we must count each and every microstate, and not just the possible sets of occupation numbers.
To begin with, assume that there is only one state at each energy level i {\displaystyle i} (there is no degeneracy). What follows next is a bit of combinatorial thinking which has little to do with accurately describing the reservoir of particles. For instance, let's say there is a total of k {\displaystyle k} boxes labelled a , b , … , k {\displaystyle a,b,\ldots ,k} . With the concept of combination , we could calculate how many ways there are to arrange N {\displaystyle N} balls into the set of boxes, where the order of balls within each box is not tracked. First, we select N a {\displaystyle N_{a}} balls from a total of N {\displaystyle N} balls to place into box a {\displaystyle a} , and continue to select for each box from the remaining balls, ensuring that every ball is placed in one of the boxes. The total number of ways that the balls can be arranged is
As every ball has been placed into a box, ( N − N a − N b − ⋯ − N k ) ! = 0 ! = 1 {\displaystyle (N-N_{a}-N_{b}-\cdots -N_{k})!=0!=1} , and we simplify the expression as
This is just the multinomial coefficient , the number of ways of arranging N items into k boxes, the l -th box holding N l items, ignoring the permutation of items in each box.
Now, consider the case where there is more than one way to put N i {\displaystyle N_{i}} particles in the box i {\displaystyle i} (i.e. taking the degeneracy problem into consideration). If the i {\displaystyle i} -th box has a "degeneracy" of g i {\displaystyle g_{i}} , that is, it has g i {\displaystyle g_{i}} "sub-boxes" ( g i {\displaystyle g_{i}} boxes with the same energy ε i {\displaystyle \varepsilon _{i}} ; these states/boxes with the same energy are called degenerate states), such that any way of filling the i {\displaystyle i} -th box where the number in the sub-boxes is changed is a distinct way of filling the box, then the number of ways of filling the i -th box must be increased by the number of ways of distributing the N i {\displaystyle N_{i}} objects in the g i {\displaystyle g_{i}} "sub-boxes". The number of ways of placing N i {\displaystyle N_{i}} distinguishable objects in g i {\displaystyle g_{i}} "sub-boxes" is g i N i {\displaystyle g_{i}^{N_{i}}} (the first object can go into any of the g i {\displaystyle g_{i}} boxes, the second object can also go into any of the g i {\displaystyle g_{i}} boxes, and so on). Thus the number of ways W {\displaystyle W} that a total of N {\displaystyle N} particles can be classified into energy levels according to their energies, with each level i {\displaystyle i} having g i {\displaystyle g_{i}} distinct states and the i -th level accommodating N i {\displaystyle N_{i}} particles, is:
This is the form for W first derived by Boltzmann . Boltzmann's fundamental equation S = k ln W {\displaystyle S=k\,\ln W} relates the thermodynamic entropy S to the number of microstates W , where k is the Boltzmann constant . It was pointed out by Gibbs however, that the above expression for W does not yield an extensive entropy, and is therefore faulty. This problem is known as the Gibbs paradox . The problem is that the particles considered by the above equation are not indistinguishable . In other words, for two particles ( A and B ) in two energy sublevels the population represented by [A,B] is considered distinct from the population [B,A] while for indistinguishable particles, they are not. If we carry out the argument for indistinguishable particles, we are led to the Bose–Einstein expression for W :
The Maxwell–Boltzmann distribution follows from this Bose–Einstein distribution for temperatures well above absolute zero, implying that g i ≫ 1 {\displaystyle g_{i}\gg 1} . The Maxwell–Boltzmann distribution also requires low density, implying that g i ≫ N i {\displaystyle g_{i}\gg N_{i}} . Under these conditions, we may use Stirling's approximation for the factorial:
to write:
Using the fact that ( 1 + N i / g i ) g i ≈ e N i {\displaystyle (1+N_{i}/g_{i})^{g_{i}}\approx e^{N_{i}}} for g i ≫ N i {\displaystyle g_{i}\gg N_{i}} we can again use Stirling's approximation to write:
This is essentially a division by N ! of Boltzmann's original expression for W , and this correction is referred to as correct Boltzmann counting .
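To make the counting concrete, here is a toy enumeration (the level structure is an invented example) of W = N! ∏ g_i^{N_i}/N_i! and of the "correct Boltzmann counting" W/N!:

```python
from math import factorial, prod

# Toy example (assumed): N = 5 particles distributed over three levels
g = [2, 3, 4]      # degeneracies g_i of the levels
N_occ = [2, 2, 1]  # occupation numbers N_i (sum to N)
N = sum(N_occ)

W_boltzmann = factorial(N) * prod(gi**ni / factorial(ni) for gi, ni in zip(g, N_occ))
W_corrected = W_boltzmann / factorial(N)   # correct Boltzmann counting: divide by N!

print(f"W (distinguishable particles) = {W_boltzmann:.0f}")
print(f"W / N!  (corrected counting)  = {W_corrected:.2f}")
```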
We wish to find the N i {\displaystyle N_{i}} for which the function W {\displaystyle W} is maximized, while considering the constraint that there is a fixed number of particles ( N = ∑ N i ) {\textstyle \left(N=\sum N_{i}\right)} and a fixed energy ( E = ∑ N i ε i ) {\textstyle \left(E=\sum N_{i}\varepsilon _{i}\right)} in the container. The maxima of W {\displaystyle W} and ln ( W ) {\displaystyle \ln(W)} are achieved by the same values of N i {\displaystyle N_{i}} and, since it is easier to accomplish mathematically, we will maximize the latter function instead. We constrain our solution using Lagrange multipliers forming the function:
Finally
In order to maximize the expression above we apply Fermat's theorem (stationary points) , according to which local extrema, if they exist, must be at critical points (where the partial derivatives vanish):
By solving the equations above ( i = 1 … n {\displaystyle i=1\ldots n} ) we arrive at an expression for N i {\displaystyle N_{i}} :
Substituting this expression for N i {\displaystyle N_{i}} into the equation for ln W {\displaystyle \ln W} and assuming that N ≫ 1 {\displaystyle N\gg 1} yields:
or, rearranging:
Boltzmann realized that this is just an expression of the Euler-integrated fundamental equation of thermodynamics . Identifying E as the internal energy, the Euler-integrated fundamental equation states that:
where T is the temperature , P is pressure, V is volume , and μ is the chemical potential . Boltzmann's equation S = k ln W {\displaystyle S=k\ln W} is the realization that the entropy is proportional to ln W {\displaystyle \ln W} with the constant of proportionality being the Boltzmann constant . Using the ideal gas equation of state ( PV = NkT ), it follows immediately that β = 1 / k T {\displaystyle \beta =1/kT} and α = − μ / k T {\displaystyle \alpha =-\mu /kT} so that the populations may now be written:
Note that the above formula is sometimes written:
where z = exp ( μ / k T ) {\displaystyle z=\exp(\mu /kT)} is the absolute activity .
Alternatively, we may use the fact that
to obtain the population numbers as
where Z is the partition function defined by:
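A minimal sketch of these population formulas (energies, degeneracies, temperature, and particle number are assumed illustrative values): compute the partition function Z = Σ_i g_i e^(−ε_i/kT) and the populations N_i = (N/Z) g_i e^(−ε_i/kT).

```python
import numpy as np

kB = 8.617333262e-5          # Boltzmann constant in eV/K
T = 300.0                    # temperature in K (assumed)
N = 1.0e6                    # total number of particles (assumed)

energies = np.array([0.0, 0.01, 0.05])   # level energies in eV (assumed)
degeneracy = np.array([1, 3, 5])          # degeneracies g_i (assumed)

boltzmann_factors = degeneracy * np.exp(-energies / (kB * T))
Z = boltzmann_factors.sum()               # partition function
populations = N * boltzmann_factors / Z   # N_i = N * g_i * exp(-eps_i/kT) / Z

for eps, gi, ni in zip(energies, degeneracy, populations):
    print(f"eps = {eps:6.3f} eV   g = {gi}   N_i = {ni:12.1f}")
print(f"check: sum of N_i = {populations.sum():.1f}  (should equal N = {N:.1f})")
```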
In an approximation where ε i is considered to be a continuous variable, the Thomas–Fermi approximation yields a continuous degeneracy g proportional to ε {\displaystyle {\sqrt {\varepsilon }}} so that:
which is just the Maxwell–Boltzmann distribution for the energy.
In the above discussion, the Boltzmann distribution function was obtained by directly analysing the multiplicities of a system. Alternatively, one can make use of the canonical ensemble . In a canonical ensemble, a system is in thermal contact with a reservoir. While energy is free to flow between the system and the reservoir, the reservoir is thought to have an infinitely large heat capacity so as to maintain a constant temperature, T , for the combined system.
In the present context, our system is assumed to have the energy levels ε i {\displaystyle \varepsilon _{i}} with degeneracies g i {\displaystyle g_{i}} . As before, we would like to calculate the probability that our system has energy ε i {\displaystyle \varepsilon _{i}} .
If our system is in state s 1 {\displaystyle \;s_{1}} , then there would be a corresponding number of microstates available to the reservoir. Call this number Ω R ( s 1 ) {\displaystyle \;\Omega _{R}(s_{1})} . By assumption, the combined system (of the system we are interested in and the reservoir) is isolated, so all microstates are equally probable. Therefore, for instance, if Ω R ( s 1 ) = 2 Ω R ( s 2 ) {\displaystyle \;\Omega _{R}(s_{1})=2\;\Omega _{R}(s_{2})} , we can conclude that our system is twice as likely to be in state s 1 {\displaystyle \;s_{1}} as in state s 2 {\displaystyle \;s_{2}} . In general, if P ( s i ) {\displaystyle \;P(s_{i})} is the probability that our system is in state s i {\displaystyle \;s_{i}} ,
Since the entropy of the reservoir S R = k ln Ω R {\displaystyle \;S_{R}=k\ln \Omega _{R}} , the above becomes
Next we recall the thermodynamic identity (from the first law of thermodynamics ):
In a canonical ensemble, there is no exchange of particles, so the d N R {\displaystyle dN_{R}} term is zero. Similarly, d V R = 0. {\displaystyle dV_{R}=0.} This gives
where U R ( s i ) {\displaystyle U_{R}(s_{i})} and E ( s i ) {\displaystyle E(s_{i})} denote the energies of the reservoir and the system at s i {\displaystyle s_{i}} , respectively. For the second equality we have used the conservation of energy. Substituting into the first equation relating P ( s 1 ) , P ( s 2 ) {\displaystyle P(s_{1}),\;P(s_{2})} :
which implies, for any state s of the system
where Z is an appropriately chosen "constant" to make total probability 1. ( Z is constant provided that the temperature T is invariant.)
where the index s runs through all microstates of the system. Z is sometimes called the Boltzmann sum over states (or "Zustandssumme" in the original German). If we index the summation via the energy eigenvalues instead of all possible states, degeneracy must be taken into account. The probability of our system having energy ε i {\displaystyle \varepsilon _{i}} is simply the sum of the probabilities of all corresponding microstates:
where, with obvious modification,
this is the same result as before.
Comments on this derivation:
The Maxwell-Boltzmann distribution describes the probability of a particle occupying an energy state E in a classical system. It takes the following form:
For a system of indistinguishable particles, we start with the canonical ensemble formalism.
In a system with energy levels { E i } {\displaystyle \{E_{i}\}} , let n i {\displaystyle n_{i}} be the number of particles in state i . The total energy and particle number are:
For a specific configuration { n i } {\displaystyle \{n_{i}\}} , the probability in the canonical ensemble is:
The factor N ! ∏ i n i ! {\displaystyle {\frac {N!}{\prod _{i}n_{i}!}}} accounts for the number of ways to distribute N indistinguishable particles among the states.
For Maxwell-Boltzmann statistics, we assume that the average occupation number of any state is much less than 1 ( ⟨ n i ⟩ ≪ 1 {\displaystyle \langle n_{i}\rangle \ll 1} ), which leads to:
where μ {\displaystyle \mu } is the chemical potential determined by ∑ i ⟨ n i ⟩ = N {\displaystyle \sum _{i}\langle n_{i}\rangle =N} .
For energy states near the Fermi energy E F {\displaystyle E_{F}} , we can express μ ≈ E F {\displaystyle \mu \approx E_{F}} , giving:
For high energies ( E ≫ E F {\displaystyle E\gg E_{F}} ), this directly gives:
For low energies ( E ≪ E F {\displaystyle E\ll E_{F}} ), using the approximation e − x ≈ 1 − x {\displaystyle e^{-x}\approx 1-x} for small x :
This is the derivation of the Maxwell-Boltzmann distribution in both energy regimes. | https://en.wikipedia.org/wiki/Maxwell–Boltzmann_statistics |
In dielectric spectroscopy , large frequency-dependent contributions to the dielectric response, especially at low frequencies, may come from build-ups of charge. This Maxwell–Wagner–Sillars polarization (or often just Maxwell–Wagner polarization ) occurs either at inner dielectric boundary layers on a mesoscopic scale, or at the external electrode-sample interface on a macroscopic scale. In both cases this leads to a separation of charges (such as through a depletion layer ). The charges are often separated over a considerable distance (relative to the atomic and molecular sizes), and the contribution to dielectric loss can therefore be orders of magnitude larger than the dielectric response due to molecular fluctuations. [ 1 ] It is named after the works of James Clerk Maxwell (1891), Karl Willy Wagner (1914) and R. W. Sillars (1937). [ 2 ]
Maxwell-Wagner polarization processes should be taken into account during the investigation of inhomogeneous materials like suspensions or colloids, biological materials, phase separated polymers, blends, and crystalline or liquid crystalline polymers. [ 3 ]
The simplest model for describing an inhomogeneous structure is a double layer arrangement, where each layer is characterized by its permittivity ϵ 1 ′ , ϵ 2 ′ {\displaystyle \epsilon '_{1},\epsilon '_{2}} and its conductivity σ 1 , σ 2 {\displaystyle \sigma _{1},\sigma _{2}} . The relaxation time for such an arrangement is given by τ M W = ϵ 0 ϵ 1 + ϵ 2 σ 1 + σ 2 {\displaystyle \tau _{MW}=\epsilon _{0}{\frac {\epsilon _{1}+\epsilon _{2}}{\sigma _{1}+\sigma _{2}}}} .
Importantly, since the materials' conductivities are in general frequency dependent, this shows that the double layer composite generally has a frequency dependent relaxation time even if the individual layers are characterized by frequency independent permittivities.
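As a small worked example of the double-layer formula (the layer parameters below are assumptions chosen only for illustration):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

# Illustrative two-layer parameters (assumed values, not from the source)
eps1, sigma1 = 5.0, 1e-9    # relative permittivity and conductivity (S/m) of layer 1
eps2, sigma2 = 80.0, 1e-6   # relative permittivity and conductivity (S/m) of layer 2

tau_MW = eps0 * (eps1 + eps2) / (sigma1 + sigma2)   # interfacial relaxation time
f_relax = 1.0 / (2.0 * math.pi * tau_MW)            # corresponding relaxation frequency

print(f"Maxwell-Wagner relaxation time tau_MW = {tau_MW:.3e} s")
print(f"corresponding relaxation frequency    = {f_relax:.3e} Hz")
```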
A more sophisticated model for treating interfacial polarization was developed by Maxwell, and later generalized by Wagner [ 4 ] and Sillars. [ 5 ] Maxwell considered a spherical particle with a dielectric permittivity ϵ 2 ′ {\displaystyle \epsilon '_{2}} and radius R {\displaystyle R} suspended in an infinite medium characterized by ϵ 1 {\displaystyle \epsilon _{1}} . Certain European textbooks represent the ϵ 1 {\displaystyle \epsilon _{1}} constant with the Greek letter ω (Omega), sometimes referred to as Doyle's constant. [ 6 ] | https://en.wikipedia.org/wiki/Maxwell–Wagner–Sillars_polarization |
In mathematics, the max–min inequality is as follows:
When equality holds, one says that f , W , and Z satisfy a strong max–min property (or a saddle-point property). The example function f ( z , w ) = sin ( z + w ) {\displaystyle \ f(z,w)=\sin(z+w)\ } illustrates that the equality does not hold for every function.
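A quick grid-based numerical illustration of the strict inequality for f(z, w) = sin(z + w) (the grid itself is an arbitrary choice):

```python
import numpy as np

z = np.linspace(0.0, 2.0 * np.pi, 721)
w = np.linspace(0.0, 2.0 * np.pi, 721)
Z, W = np.meshgrid(z, w, indexing="ij")   # axis 0: z, axis 1: w
F = np.sin(Z + W)

sup_inf = F.min(axis=1).max()   # sup_z inf_w f(z, w)
inf_sup = F.max(axis=0).min()   # inf_w sup_z f(z, w)

print(f"sup_z inf_w sin(z+w) = {sup_inf:.4f}")   # approximately -1
print(f"inf_w sup_z sin(z+w) = {inf_sup:.4f}")   # approximately +1
```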
A theorem giving conditions on f , W , and Z which guarantee the saddle point property is called a minimax theorem .
Define g ( z ) ≜ inf w ∈ W f ( z , w ) . {\displaystyle g(z)\triangleq \inf _{w\in W}f(z,w)\ .} For all z ∈ Z {\displaystyle z\in Z} , we get g ( z ) ≤ f ( z , w ) {\textstyle g(z)\leq f(z,w)} for all w ∈ W {\displaystyle w\in W} by definition of the infimum being a lower bound. Next, for all w ∈ W {\textstyle w\in W} , f ( z , w ) ≤ sup z ∈ Z f ( z , w ) {\displaystyle f(z,w)\leq \sup _{z\in Z}f(z,w)} for all z ∈ Z {\textstyle z\in Z} by definition of the supremum being an upper bound. Thus, for all z ∈ Z {\displaystyle z\in Z} and w ∈ W {\displaystyle w\in W} , g ( z ) ≤ f ( z , w ) ≤ sup z ∈ Z f ( z , w ) {\displaystyle g(z)\leq f(z,w)\leq \sup _{z\in Z}f(z,w)} making h ( w ) ≜ sup z ∈ Z f ( z , w ) {\displaystyle h(w)\triangleq \sup _{z\in Z}f(z,w)} an upper bound on g ( z ) {\displaystyle g(z)} for any choice of w ∈ W {\displaystyle w\in W} . Because the supremum is the least upper bound, sup z ∈ Z g ( z ) ≤ h ( w ) {\displaystyle \sup _{z\in Z}g(z)\leq h(w)} holds for all w ∈ W {\displaystyle w\in W} . From this inequality, we also see that sup z ∈ Z g ( z ) {\displaystyle \sup _{z\in Z}g(z)} is a lower bound on h ( w ) {\displaystyle h(w)} . By the greatest lower bound property of infimum, sup z ∈ Z g ( z ) ≤ inf w ∈ W h ( w ) {\displaystyle \sup _{z\in Z}g(z)\leq \inf _{w\in W}h(w)} . Putting all the pieces together, we get
sup z ∈ Z inf w ∈ W f ( z , w ) = sup z ∈ Z g ( z ) ≤ inf w ∈ W h ( w ) = inf w ∈ W sup z ∈ Z f ( z , w ) {\displaystyle \sup _{z\in Z}\inf _{w\in W}f(z,w)=\sup _{z\in Z}g(z)\leq \inf _{w\in W}h(w)=\inf _{w\in W}\sup _{z\in Z}f(z,w)}
which proves the desired inequality. ◼ {\displaystyle \blacksquare } | https://en.wikipedia.org/wiki/Max–min_inequality |
In social choice theory , May's theorem , also called the general possibility theorem , [ 1 ] says that majority vote is the unique ranked social choice function between two candidates that satisfies the following criteria:
The theorem was first published by Kenneth May in 1952. [ 1 ]
Various modifications have been suggested by others since the original publication. If rated voting is allowed, a wide variety of rules satisfy May's conditions, including score voting or highest median voting rules .
Arrow's theorem does not apply to the case of two candidates (when there are trivially no "irrelevant alternatives"), so this possibility result can be seen as the mirror analogue of that theorem. Note that anonymity is a stronger requirement than Arrow's non-dictatorship .
Another way of explaining the fact that simple majority voting can successfully deal with at most two alternatives is to cite Nakamura's theorem. The theorem states that the number of alternatives that a rule can deal with successfully is less than the Nakamura number of the rule. The Nakamura number of simple majority voting is 3, except in the case of four voters. Supermajority rules may have greater Nakamura numbers. [ citation needed ]
Let A and B be two possible choices, often called alternatives or candidates. A preference is then simply a choice of whether A , B , or neither is preferred. [ 1 ] Denote the set of preferences by { A , B , 0 }, where 0 represents neither.
Let N be a positive integer. In this context, an ordinal (ranked) social choice function is a function
which aggregates individuals' preferences into a single preference. [ 1 ] An N - tuple ( R 1 , …, R N ) ∈ { A , B , 0} N of voters' preferences is called a preference profile .
Define a social choice function called simple majority voting as follows: [ 1 ]
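A minimal sketch of this rule under the usual net-count definition (the function and the encoding of preferences as 'A', 'B', or 0 for indifference are my own):

```python
def simple_majority(profile):
    """Return 'A', 'B', or 0 under simple majority voting.

    profile: iterable of individual preferences, each 'A', 'B', or 0 (indifference).
    """
    a_votes = sum(1 for r in profile if r == "A")
    b_votes = sum(1 for r in profile if r == "B")
    if a_votes > b_votes:
        return "A"
    if b_votes > a_votes:
        return "B"
    return 0   # tie or universal indifference

# Anonymity: permuting the voters does not change the outcome
print(simple_majority(["A", "B", "A", 0]))   # 'A'
print(simple_majority([0, "A", "A", "B"]))   # 'A' (same multiset of ballots)
# Neutrality: swapping the names of A and B swaps the outcome
print(simple_majority(["B", "A", "B", 0]))   # 'B'
```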
May's theorem states that simple majority voting is the unique social choice function satisfying all three of the following conditions: [ 1 ] | https://en.wikipedia.org/wiki/May's_theorem |
The Maya ICBG bioprospecting controversy took place in 1999–2000, when the International Cooperative Biodiversity Group led by ethnobiologist Dr. Brent Berlin was accused of engaging in unethical forms of bioprospecting ( biopiracy ) by several NGOs and indigenous organizations. The ICBG had as its aim to document the biodiversity of Chiapas , Mexico and the ethnobotanical knowledge of the indigenous Maya people – to ascertain whether there were possibilities of developing medical products based on any of the plants used by the indigenous groups.
While the project had taken many precautions to act ethically in its dealings with the indigenous groups, the project became subject to severe criticisms of the methods used to attain prior informed consent . Among other things critics argued that the project had not devised a strategy for achieving informed consent from the entire community to which they argued the ethnobotanical knowledge belonged, and whom they argued would be affected by its commercialization. The project's directors argued that the knowledge was properly to be considered part of the public domain and therefore open to commercialization, and they argued that they had followed established best practice of ethical conduct in research to the letter. After a public discussion carried out in the media and on internet listservers the project's partners pulled out, and the ICBG was closed down in 2001, two years into its five years of allotted funding.
The Maya ICBG case was among the first to draw attention to the problems of distinguishing between benign forms of bioprospecting and unethical biopiracy, and to the difficulties of securing community participation and prior informed consent for would-be bioprospectors.
In 1993, the National Institutes of Health , National Science Foundation and USAID established the International Cooperative Biodiversity Group (ICBG) program to promote collaborative research on biodiversity between American universities and research institutions in countries that harbor unique genetic resources in the form of biodiversity . The basic aim of the program was to benefit both the host community and the global scientific community by discovering and researching the possibilities for new solutions to human health problems based on previously unexplored genetic resources. The program therefore sought to conserve biodiversity , and to encourage and support sustainable practices of usage of biological resources in the Global South . Projects would be initiated by principal investigators who would apply for a five-year period of funding, and who would establish the terms of the collaboration. [ 1 ]
In 1998, the renowned ethnobotanist Brent Berlin and his wife, Dr. Eloise A. Berlin , founded an International Cooperative Biodiversity Group – the Maya ICBG. [ 2 ] [ 3 ]
The group was intended as a combined bioprospecting and research cooperative between the University of Georgia, where the Berlins were employed; ECOSUR, a university in Chiapas; a small Welsh pharmaceutical company called Molecular Nature Ltd.; and a newly created NGO called PROMAYA, intended to represent the indigenous Maya of Chiapas. The two primary investigators had worked for more than 40 years documenting and describing the ethnobotany and medicinal knowledge of the Tzeltal Maya of the Chiapas region. [ 4 ] [ 5 ] [ 6 ]
The aim of the project was to collect and document the ethnobotanical knowledge of the Maya people of Chiapas, one of the world's biodiversity hotspots . [ 7 ]
The NGO PROMAYA was established as a foundation that could receive a percentage of the profits from any marketable products resulting from the research, as well as exercise rights over the uses to which the indigenous knowledge would be put. As such, PROMAYA represented the project's will to comply with valid ethical standards and share rights and benefits with the original holders of the medicinal knowledge. Berlin began the NGO by contributing $30,000, money he had personally received as an award for his research. The benefit share agreement on any profits derived from the project allotted the majority to the Welsh pharmaceutical company, about 12–15 percent to the University of Georgia and 2–5% to the PROMAYA NGO. The plan was that Maya communities could then petition for grants from the NGO, to be used for community development. [ 8 ] [ 9 ]
The project began with an information campaign directed at the Maya communities with which they wished to cooperate. Using the medium of theater they presented the aims and goals of the project to the Maya. The information step was a vital part of the project's attempt to obtain prior informed consent from members of the participating communities. The project made the deliberate decision not to include information about the possibility that profits would eventually be made from the knowledge collected, or information on how any potential benefits would be divided among them, surmising that the chance of this happening was so slim that it would be a better strategy to introduce this issue when and if it were to arise. This decision would later be an important point of criticism by activists claiming that prior informed consent had not been obtained. [ 4 ] [ 10 ] [ 11 ]
Soon after being initiated, the project became the subject of harsh criticisms by indigenous activists and Mexican intellectuals who questioned how knowledge obtained from individual Maya could be patented by researchers or foreign pharmaceutical companies, how the PROMAYA NGO established by the Berlins and under their control could be considered representative of the many different Maya communities in Chiapas, and how it was possible for the knowledge that had been the collective property of the Maya peoples to become suddenly privatized without the prior consent of each of the individual initial holders of the knowledge. Among the most vocal opponents of the project were RAFI, a Canadian NGO, and COMPITCH, an organization of indigenous healers. Much of the criticism was circulated on listservers and on internet fora. [ 12 ] [ 13 ] [ 14 ]
The Berlins argued that the establishment of the NGO was the only feasible way of managing benefit sharing with the community and of obtaining prior informed consent , and that since the traditional knowledge was in the public domain among the Maya, no individual Maya could expect remuneration. [ 15 ] As tensions mounted, the Mexican partner UNAM withdrew its support for the project, and the NIH later followed, causing the project to be closed down in 2001 – without having been able to produce any results. [ 10 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ]
No one seriously doubted that Berlin and the ICBG had the best intentions of ethical conduct, nonetheless, there remain serious criticisms of the way in which the project was planned and carried out, and the assumptions on which the project was based have been characterized as naïve. [ 10 ] The Maya ICBG case was among the first to draw attention to the problems of distinguishing between bioprospecting and biopiracy , and to the difficulties of securing community participation and prior informed consent for bioprospectors. [ 11 ] [ 20 ] [ 21 ] [ 22 ] | https://en.wikipedia.org/wiki/Maya_ICBG_bioprospecting_controversy |
The Mayan numeral system was the system used to represent numbers and calendar dates in the Maya civilization . It was a vigesimal (base-20) positional numeral system . The numerals are made up of three symbols: zero (a shell), [ 1 ] one (a dot) and five (a bar). For example, thirteen is written as three dots in a horizontal row above two horizontal bars; sometimes it is also written as three vertical dots to the left of two vertical bars. With these three symbols, each of the twenty vigesimal digits could be written.
Numbers after 19 were written vertically in powers of twenty. The Maya used powers of twenty, just as the Hindu–Arabic numeral system uses powers of ten. [ 2 ]
For example, thirty-three would be written as one dot, above three dots atop two bars. The first dot represents "one twenty" or "1×20", which is added to three dots and two bars, or thirteen. Therefore, (1×20) + 13 = 33.
Upon reaching 20 2 or 400, another row is started (20 3 or 8000, then 20 4 or 160,000, and so on). The number 429 would be written as one dot above one dot above four dots and a bar, or (1×20 2 ) + (1×20 1 ) + 9 = 429.
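A short sketch (the functions and the text rendering are my own) that converts a non-negative integer into its pure base-20 digits, most significant first, and renders each digit as dots and bars:

```python
def to_vigesimal(n):
    """Return the base-20 digits of n, most significant digit first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % 20)
        n //= 20
    return digits[::-1]

def render_digit(d):
    """Render one Maya digit (0-19): '.' per dot, '-' per bar, zero as the shell glyph."""
    if d == 0:
        return "(shell)"
    return "." * (d % 5) + " " + "-" * (d // 5)

for number in (13, 33, 429):
    digits = to_vigesimal(number)
    print(number, digits, [render_digit(d).strip() for d in digits])
    # 33  -> [1, 13]    i.e. (1 x 20) + 13
    # 429 -> [1, 1, 9]  i.e. (1 x 400) + (1 x 20) + 9
```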
Other than the bar and dot notation, Maya numerals were sometimes illustrated by face type glyphs or pictures. The face glyph for a number represents the deity associated with the number. These face number glyphs were rarely used, and are mostly seen on some of the most elaborate monumental carvings.
There are different representations of zero in the Dresden Codex , as can be seen at page 43b (which is concerned with the synodic cycle of Mars). [ 3 ] It has been suggested that these pointed, oblong "bread" representations are calligraphic variants of the PET logogram, approximately meaning "circular" or "rounded", and perhaps the basis of a derived noun meaning "totality" or "grouping", such that the representations may be an appropriate marker for a number position which has reached its totality. [ 4 ]
Adding and subtracting numbers below 20 using Mayan numerals is very simple. Addition is performed by combining the numeric symbols at each level:
If five or more dots result from the combination, five dots are removed and replaced by a bar. If four or more bars result, four bars are removed and a dot is added to the next higher row. This also means that the value of 1 bar is 5.
Similarly with subtraction , remove the elements of the subtrahend symbol from the minuend symbol:
If there are not enough dots in a minuend position, a bar is replaced by five dots. If there are not enough bars, a dot is removed from the next higher minuend symbol in the column and four bars are added to the minuend symbol which is being worked on.
The "Long Count" portion of the Maya calendar uses a variation on the strictly vigesimal numerals to show a Long Count date . In the second position, only the digits up to 17 are used, and the place value of the third position is not 20×20 = 400, as would otherwise be expected, but 18×20 = 360 so that one dot over two zeros signifies 360. Presumably, this is because 360 is roughly the number of days in a year . (The Maya had however a quite accurate estimation of 365.2422 days for the solar year at least since the early Classic era .) [ 5 ] Subsequent positions use all twenty digits and the place values continue as 18×20×20 = 7,200 and 18×20×20×20 = 144,000, etc.
Every known example of large numbers in the Maya system uses this 'modified vigesimal' system, with the third position representing multiples of 18×20. It is reasonable to assume, but not proven by any evidence, that the normal system in use was a pure base-20 system. [ 6 ]
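A small sketch (my own helper, using the modified place values described above) that converts a five-digit Long Count date into a day count:

```python
def long_count_to_days(baktun, katun, tun, winal, kin):
    """Days represented by a Long Count date under the modified vigesimal system.

    Place values: kin = 1, winal = 20, tun = 18*20 = 360,
    katun = 18*20*20 = 7200, baktun = 18*20*20*20 = 144000.
    """
    place_values = (144000, 7200, 360, 20, 1)
    digits = (baktun, katun, tun, winal, kin)
    return sum(d * p for d, p in zip(digits, place_values))

# Example date (chosen arbitrarily): 9.12.11.5.18
# 9*144000 + 12*7200 + 11*360 + 5*20 + 18 = 1,386,478 days
print(long_count_to_days(9, 12, 11, 5, 18))
```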
Several Mesoamerican cultures used similar numerals and base-twenty systems and the Mesoamerican Long Count calendar requiring the use of zero as a place-holder. The earliest long count date (on Stela 2 at Chiappa de Corzo, Chiapas ) is from 36 BC. [ a ]
Since the eight earliest Long Count dates appear outside the Maya homeland, [ 7 ] it is assumed that the use of zero and the Long Count calendar predated the Maya, and was possibly the invention of the Olmec . Indeed, many of the earliest Long Count dates were found within the Olmec heartland. However, the Olmec civilization had come to an end by the 4th century BC, several centuries before the earliest known Long Count dates—which suggests that zero was not an Olmec discovery.
Mayan numeral codes in Unicode comprise the range U+1D2E0 to U+1D2F3 | https://en.wikipedia.org/wiki/Maya_numerals |