Epiphenomenalism is a position in the philosophy of mind on the mind–body problem. It holds that subjective mental events are completely dependent for their existence on corresponding physical and biochemical events within the human body, but do not themselves influence physical events. According to epiphenomenalism, the appearance that subjective mental states (such as intentions) influence physical events is an illusion, with consciousness being a by-product of physical states of the world. For instance, fear seems to make the heart beat faster, but according to epiphenomenalism the biochemical secretions of the brain and nervous system (such as adrenaline)—not the experience of fear—are what raise the heart rate. Because mental events are a kind of overflow that cannot cause anything physical, yet have non-physical properties, epiphenomenalism is viewed as a form of property dualism.
== Development ==
During the 17th century, René Descartes argued that animals are subject to mechanical laws of nature. He defended the idea of automatic behavior, or the performance of actions without conscious thought. Descartes questioned how the immaterial mind and the material body can interact causally. His interactionist model (1649) held that the body relates to the mind through the pineal gland. La Mettrie, Leibniz, and Spinoza each continued this line of thinking in their own way. The idea that, even if an animal were conscious, nothing would be added to the production of behavior—even in animals of the human type—was first voiced by La Mettrie (1745), then by Cabanis (1802), and was further explicated by Hodgson (1870) and Thomas Henry Huxley (1874).
Thomas Henry Huxley agreed with Descartes that behavior is determined solely by physical mechanisms, but he also believed that humans enjoy an intelligent life. In 1874, Huxley argued, in the Presidential Address to the British Association for the Advancement of Science, that animals are conscious automata. Huxley proposed that psychical changes are collateral products of physical changes. Like the bell of a clock that has no role in keeping the time, consciousness has no role in determining behavior.
Huxley defended automatism by experiments on reflex actions, a phenomenon originally described by Descartes. Huxley hypothesized that frogs that undergo lobotomy would swim when thrown into water, despite being unable to initiate actions. He argued that the ability to swim was solely dependent on molecular change in the brain, concluding that consciousness is not necessary for reflex actions. According to epiphenomenalism, animals experience pain only as a result of neurophysiology.
In 1870, Huxley conducted a case study on a French soldier who had sustained a shot in the Franco-Prussian war that fractured his left parietal bone. Every few weeks the soldier would enter a trance-like state, smoking, dressing himself, and aiming his cane like a rifle all while being insensitive to pins, electric shocks, odorous substances, vinegar, noise, and certain light conditions. Huxley used this study to show that consciousness was not necessary to execute these purposeful actions, justifying the assumption that humans are insensible machines. Huxley's mechanistic attitude towards the body convinced him that the brain alone causes behavior.
In the early 1900s, scientific behaviorists such as Ivan Pavlov, John B. Watson, and B. F. Skinner began the attempt to uncover laws describing the relationship between stimuli and responses, without reference to inner mental phenomena. Instead of adopting a form of eliminativism or mental fictionalism, positions that deny that inner mental phenomena exist, a behaviorist was able to adopt epiphenomenalism in order to allow for the existence of mind. George Santayana (1905) believed that all motion has physical causes. Because consciousness is accessory to life and not essential to it, natural selection is responsible for ingraining tendencies to avoid certain contingencies without any conscious achievement involved. By the 1960s, scientific behaviorism met substantial difficulties and eventually gave way to the cognitive revolution. Participants in that revolution, such as Jerry Fodor, reject epiphenomenalism and insist upon the efficacy of the mind. Fodor even speaks of "epiphobia"—fear that one is becoming an epiphenomenalist.
However, since the cognitive revolution, there have been several who have argued for a version of epiphenomenalism. In 1970, Keith Campbell proposed his "new epiphenomenalism", which states that the body produces a spiritual mind that does not act on the body. How the brain causes a spiritual mind, according to Campbell, is destined to remain beyond our understanding forever. In 2001, David Chalmers and Frank Jackson argued that claims about conscious states should be deduced a priori from claims about physical states alone. They offered that epiphenomenalism bridges, but does not close, the explanatory gap between the physical and the phenomenal realms. These more recent versions maintain that only the subjective, qualitative aspects of mental states are epiphenomenal. Imagine both Pierre and a robot eating a cupcake. Unlike the robot, Pierre is conscious of eating the cupcake while the behavior is under way. This subjective experience is often called a quale (plural qualia), and it describes the private "raw feel" or the subjective "what-it-is-like" that is the inner accompaniment of many mental states. Thus, while Pierre and the robot are both doing the same thing, only Pierre has the inner conscious experience.
Frank Jackson (1982), for example, once espoused the following view:
I am what is sometimes known as a "qualia freak". I think that there are certain features of bodily sensations especially, but also of certain perceptual experiences, which no amount of purely physical information includes. Tell me everything physical there is to tell about what is going on in a living brain... you won't have told me about the hurtfulness of pains, the itchiness of itches, pangs of jealousy....
Some thinkers draw distinctions between different varieties of epiphenomenalism. In Consciousness Explained, Daniel Dennett distinguishes between a purely metaphysical sense of epiphenomenalism, in which the epiphenomenon has no causal impact at all, and Huxley's "steam whistle" epiphenomenalism, in which effects exist but are not functionally relevant.
== Arguments for ==
Some neurophysiological data have been proffered in support of epiphenomenalism. Among the oldest such data is the Bereitschaftspotential or "readiness potential", in which electrical activity related to voluntary actions can be recorded up to two seconds before the subject is aware of making a decision to perform the action. More recently, Benjamin Libet et al. (1979) showed that it can take 0.5 seconds before a stimulus becomes part of conscious experience, even though subjects can respond to the stimulus in reaction-time tests within 200 milliseconds. The methods and conclusions of this experiment have received much criticism (e.g., see the many critical commentaries on Libet's (1985) target article), including fairly recently from neuroscientists such as Peter Tse, who claims to show that the readiness potential has nothing to do with consciousness at all.
== Arguments against ==
The most powerful argument against epiphenomenalism is that it is self-contradictory: if we have knowledge about epiphenomenalism, then our brains know about the existence of the mind, but if epiphenomenalism were correct, then our brains should not have any knowledge about the mind, because the mind does not affect anything physical.
However, some philosophers do not accept this as a rigorous refutation. For example, philosopher Victor Argonov states that epiphenomenalism is a questionable, but experimentally falsifiable theory. He argues that the personal mind is not the only source of knowledge about the existence of mind in the world. A creature (even a philosophical zombie) could have knowledge about the mind and the mind-body problem by virtue of some innate knowledge. The information about the mind (and its problematic properties such as qualia and the hard problem of consciousness) could have been, in principle, implicitly "written" in the material world since its creation. Epiphenomenalists can say that God created an immaterial mind and a detailed "program" of material human behavior that makes it possible to speak about the mind–body problem. That version of epiphenomenalism seems highly exotic, but it cannot be excluded from consideration by pure theory. However, Argonov suggests that experiments could refute epiphenomenalism. In particular, epiphenomenalism could be refuted if neural correlates of consciousness can be found in the human brain, and it is proven that human speech about consciousness is caused by them.
Some philosophers, such as Daniel Dennett, reject both epiphenomenalism and the existence of qualia with the same charge that Gilbert Ryle leveled against a Cartesian "ghost in the machine", that they too are category mistakes. A quale or conscious experience would not belong to the category of objects of reference on this account, but rather to the category of ways of doing things.
Functionalists assert that mental states are well described by their overall role, their activity in relation to the organism as a whole. "This doctrine is rooted in Aristotle's conception of the soul, and has antecedents in Hobbes's conception of the mind as a 'calculating machine', but it has become fully articulated (and popularly endorsed) only in the last third of the 20th century." In so far as it mediates stimulus and response, a mental function is analogous to a program that processes input/output in automata theory. In principle, multiple realisability guarantees that platform dependencies can be avoided, whether in terms of hardware and operating system or, ex hypothesi, biology and psychology. Because a high-level language is a practical requirement for developing the most complex programs, functionalism implies that a non-reductive physicalism would offer a similar advantage over a strictly eliminative materialism.
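The automata-theory analogy in the paragraph above can be made concrete with a toy sketch. This is only an illustration of multiple realisability as a programming idea, not a model of mind: the state names and both "substrates" are invented for the example. Two implementations with entirely different internals realize the same (state, stimulus) → (next state, response) role, and on the functionalist analogy they thereby count as the same functional state.

```python
# Illustrative sketch only: the functionalist automata analogy, not a model of mind.
# A functional state is characterized by its role: how it maps
# (state, stimulus) to (next state, response). Two different "substrates"
# realizing the same role are functionally identical -- multiple realisability.

TRANSITIONS = {
    ("calm", "threat"): ("afraid", "flee"),
    ("afraid", "safety"): ("calm", "rest"),
}

class DictAutomaton:
    """One realization: a lookup-table 'substrate'."""
    def __init__(self):
        self.state = "calm"

    def step(self, stimulus):
        self.state, response = TRANSITIONS[(self.state, stimulus)]
        return response

class IfElseAutomaton:
    """A second realization: branching code, same functional role."""
    def __init__(self):
        self.state = "calm"

    def step(self, stimulus):
        if self.state == "calm" and stimulus == "threat":
            self.state = "afraid"
            return "flee"
        if self.state == "afraid" and stimulus == "safety":
            self.state = "calm"
            return "rest"
        raise ValueError("undefined transition")

# Despite different internals, the input/output roles coincide exactly.
a, b = DictAutomaton(), IfElseAutomaton()
for stimulus in ["threat", "safety"]:
    assert a.step(stimulus) == b.step(stimulus)
```

On this picture, asking which implementation is the "real" fear-state is a confusion; only the role matters, which is why functionalists take mental kinds to be substrate-independent.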
Eliminative materialists believe "folk psychology" is so unscientific that, ultimately, it will be better to eliminate primitive concepts such as mind, desire and belief, in favor of a future neuroscientific account. A more moderate position such as J. L. Mackie's error theory suggests that false beliefs should be stripped away from a mental concept without eliminating the concept itself, the legitimate core meaning being left intact.
Benjamin Libet's results are quoted in favor of epiphenomenalism, but he believes subjects still have a "conscious veto", since the readiness potential does not invariably lead to an action. In Freedom Evolves, Daniel Dennett argues that a no-free-will conclusion is based on dubious assumptions about the location of consciousness, as well as questioning the accuracy and interpretation of Libet's results. Similar criticism of Libet-style research has been made by neuroscientist Adina Roskies and cognitive theorists Tim Bayne and Alfred Mele.
Others have argued that data such as the Bereitschaftspotential undermine epiphenomenalism for the same reason, that such experiments rely on a subject reporting the point in time at which a conscious experience and a conscious decision occurs, thus relying on the subject to be able to consciously perform an action. That ability would seem to be at odds with early epiphenomenalism, which according to Huxley is the broad claim that consciousness is "completely without any power… as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". Mind–body dualists reject epiphenomenalism on the same grounds.
Adrian G. Guggisberg and Annaïs Mottaz have also challenged those findings.
A study by Aaron Schurger and colleagues published in PNAS challenged assumptions about the causal nature of the readiness potential itself (and the "pre-movement buildup" of neural activity in general), thus denying the conclusions drawn from studies such as Libet's and Fried's.
In favor of interactionism, Celia Green (2003) argues that epiphenomenalism does not even provide a satisfactory solution to the problem of interaction posed by substance dualism. Although it does not entail substance dualism, according to Green, epiphenomenalism implies a one-way form of interactionism that is just as hard to conceive of as the two-way form embodied in substance dualism. Green suggests the assumption that it is less of a problem may arise from the unexamined belief that physical events have some sort of primacy over mental ones.
A number of scientists and philosophers, including William James, Karl Popper, John C. Eccles and Donald Symons, dismiss epiphenomenalism from an evolutionary perspective. They point out that the view that mind is an epiphenomenon of brain activity is not consistent with evolutionary theory, because if mind were functionless, it would have disappeared long ago, as it would not have been favoured by evolution.
== See also ==
== Notes ==
== Further reading ==
Chalmers, David. (1996) The Conscious Mind: In Search of a Fundamental Theory, Oxford: Oxford University Press.
Green, Celia. (2003) The Lost Cause: Causation and the Mind-Body Problem, Oxford: Oxford Forum.
Jackson, Frank. (1982) "Epiphenomenal Qualia", The Philosophical Quarterly, 32, pp. 127–136.
James, William. (1890) The Principles of Psychology, Henry Holt and Company.
Libet, Benjamin; Wright, E. W.; Feinstein, B.; Pearl, D. K. (1979). "Subjective Referral of the Timing for a Conscious Sensory Experience". Brain. 102 (1): 193–224. doi:10.1093/brain/102.1.193. PMID 427530.
Libet, Benjamin (1985). "Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action". Behavioral and Brain Sciences. 8 (4): 529–566. doi:10.1017/s0140525x00044903. S2CID 6965339.
Robinson, William (2019) Epiphenomenal Mind: An Integrated Outlook on Sensations, Beliefs, and Pleasure, New York and London: Routledge.
== External links ==
Epiphenomenalism Explained, an article by Norman Bacrac in Philosophy Now
Action theory or theory of action is an area in philosophy concerned with theories about the processes causing willful human bodily movements of a more or less complex kind. This area of thought involves epistemology, ethics, metaphysics, jurisprudence, and philosophy of mind, and has attracted the strong interest of philosophers ever since Aristotle's Nicomachean Ethics (Third Book). With the advent of psychology and later neuroscience, many theories of action are now subject to empirical testing.
Philosophical action theory, or the philosophy of action, should not be confused with sociological theories of social action, such as the action theory established by Talcott Parsons. Nor should it be confused with activity theory.
== Overview ==
Basic action theory typically describes action as intentional behavior caused by an agent in a particular situation. The agent's desires and beliefs (e.g. a person wanting a glass of water and believing that the clear liquid in the cup in front of them is water) lead to bodily behavior (e.g. reaching across for the glass). In the simple theory (see Donald Davidson), the desire and belief jointly cause the action. Michael Bratman has raised problems for such a view and argued that we should take the concept of intention as basic and not analyzable into beliefs and desires.
Aristotle held that a thorough explanation must give an account of both the efficient cause, the agent, and the final cause, the intention.
In some theories a desire plus a belief about the means of satisfying that desire are always what is behind an action. Agents aim, in acting, to maximize the satisfaction of their desires. Such a theory of prospective rationality underlies much of economics and other social sciences within the more sophisticated framework of rational choice. However, many theories of action argue that rationality extends far beyond calculating the best means to achieve one's ends. For instance, a belief that I ought to do X, in some theories, can directly cause me to do X without my having to want to do X (i.e. have a desire to do X). Rationality, in such theories, also involves responding correctly to the reasons an agent perceives, not just acting on wants.
While action theorists generally employ the language of causality in their theories of what the nature of action is, the issue of what causal determination comes to has been central to controversies about the nature of free will.
Conceptual discussions also revolve around a precise definition of action in philosophy. Scholars may disagree about which bodily movements fall under this category, e.g. whether thinking should be analysed as action, and about how complex actions—those involving several steps and diverse intended consequences—are to be summarised or decomposed.
== See also ==
Praxeology
Free will
Humeanism § Theory of action
Cybernetics
== References ==
== Further reading ==
Maurice Blondel (1893). L'Action - Essai d'une critique de la vie et d'une science de la pratique
G. E. M. Anscombe (1957). Intention, Basil Blackwell, Oxford.
James Sommerville (1968). Total Commitment, Blondel's L'Action, Corpus Books.
Michel Crozier & Erhard Friedberg (1980). Actors and Systems, Chicago: University of Chicago Press.
Donald Davidson (1980). Essays on Actions and Events, Clarendon Press, Oxford.
Jonathan Dancy & Constantine Sandis (eds.) (2015). Philosophy of Action: An Anthology, Wiley-Blackwell, Oxford.
Jennifer Hornsby (1980). Actions, Routledge, London.
Lilian O'Brien (2014). Philosophy of Action, Palgrave, Basingstoke.
Christine Korsgaard (2008). The Constitution of Agency, Oxford University Press, Oxford.
Alfred R. Mele (ed.) (1997). The Philosophy of Action, Oxford University Press, Oxford.
John Hyman & Helen Steward (eds.) (2004). Agency and Action, Cambridge University Press, Cambridge.
Anton Leist (ed.) (2007). Action in Context, Walter de Gruyter, Berlin.
Timothy O'Connor & Constantine Sandis (eds.) (2010). A Companion to the Philosophy of Action, Wiley-Blackwell, Oxford.
Sarah Paul (2020). The Philosophy of Action: A Contemporary Introduction, London, Routledge.
Peter Šajda et al. (eds.) (2012). Affectivity, Agency and Intersubjectivity, L'Harmattan, Paris.
Constantine Sandis (ed.) (2009). New Essays on the Explanation of Action, Palgrave Macmillan, Basingstoke.
Constantine Sandis (ed.) (2019). Philosophy of Action from Suarez to Anscombe, London, Routledge.
Michael Thompson (2012). Life and Action: Elementary Structures of Practice and Practical Thought, Boston, MA, Harvard University Press.
Lawrence H. Davis (1979). Theory of Action, Prentice-Hall, (Foundations of Philosophy Series), Englewood Cliffs, NJ.
== External links ==
Zalta, Edward N. (ed.). "Action". Stanford Encyclopedia of Philosophy.
"Thomas Reid's Theory of Action". Internet Encyclopedia of Philosophy. | Wikipedia/Action_theory_(philosophy) |
Libertarianism is one of the main philosophical positions related to the problems of free will and determinism which are part of the larger domain of metaphysics. In particular, libertarianism is an incompatibilist position which argues that free will is logically incompatible with a deterministic universe. Libertarianism states that since agents have free will, determinism must be false.
One of the first clear formulations of libertarianism is found in John Duns Scotus. In a theological context, metaphysical libertarianism was notably defended by Jesuit authors like Luis de Molina and Francisco Suárez against the rather compatibilist Thomist Bañezianism. Other important metaphysical libertarians in the early modern period were René Descartes, George Berkeley, Immanuel Kant and Thomas Reid.
Roderick Chisholm was a prominent defender of libertarianism in the 20th century and contemporary libertarians include Robert Kane, Geert Keil, Peter van Inwagen and Robert Nozick.
== Overview ==
The first recorded use of the term libertarianism was in 1789 by William Belsham in a discussion of free will and in opposition to necessitarian or determinist views.
Metaphysical libertarianism is one philosophical viewpoint falling under incompatibilism. Libertarianism holds to a concept of free will that requires the agent to be able to take more than one possible course of action under a given set of circumstances.
Accounts of libertarianism subdivide into non-physical theories and physical or naturalistic theories. Non-physical theories hold that the events in the brain that lead to the performance of actions do not have an entirely physical explanation, and consequently the world is not closed under physics. Such interactionist dualists believe that some non-physical mind, will, or soul overrides physical causality.
Explanations of libertarianism that do not involve dispensing with physicalism require physical indeterminism, such as probabilistic subatomic particle behavior—a theory unknown to many of the early writers on free will. Physical determinism, under the assumption of physicalism, implies there is only one possible future and is therefore not compatible with libertarian free will. Some libertarian explanations involve invoking panpsychism, the theory that a quality of mind is associated with all particles, and pervades the entire universe, in both animate and inanimate entities. Other approaches do not require free will to be a fundamental constituent of the universe; ordinary randomness is appealed to as supplying the "elbow room" believed to be necessary by libertarians.
Free volition is regarded as a particular kind of complex, high-level process with an element of indeterminism. An example of this kind of approach has been developed by Robert Kane, where he hypothesizes that,
In each case, the indeterminism is functioning as a hindrance or obstacle to her realizing one of her purposes—a hindrance or obstacle in the form of resistance within her will which has to be overcome by effort.
Although at the time quantum mechanics (and physical indeterminism) was only in the initial stages of acceptance, C. S. Lewis stated in his book Miracles: A Preliminary Study the logical possibility that, if the physical world were proved indeterministic, this would provide an entry point for describing the action of a non-physical entity on physical reality. Indeterministic physical models (particularly those involving quantum indeterminacy) introduce random occurrences at an atomic or subatomic level. These events might affect brain activity, and could seemingly allow incompatibilist free will if the apparent indeterminacy of some mental processes (for instance, subjective perceptions of control in conscious volition) maps to the underlying indeterminacy of the physical construct. This relationship, however, requires a causative role over probabilities that is questionable, and it is far from established that brain activity responsible for human action can be affected by such events. Secondly, these incompatibilist models depend on the relationship between action and conscious volition, as studied in the neuroscience of free will. It is evident that observation may disturb the outcome of the observation itself, limiting our ability to identify causality. Niels Bohr, one of the main architects of quantum theory, suggested, however, that no connection could be made between the indeterminism of nature and freedom of will.
== Agent-causal theories ==
In non-physical theories of free will, agents are assumed to have power to intervene in the physical world, a view known as agent causation. Proponents of agent causation include George Berkeley, Thomas Reid, and Roderick Chisholm.
Most events can be explained as the effects of prior events. When a tree falls, it does so because of the force of the wind, its own structural weakness, and so on. However, when a person performs a free act, agent causation theorists say that the action was not caused by any other events or states of affairs, but rather was caused by the agent. Agent causation is ontologically separate from event causation. The action was not uncaused, because the agent caused it. But the agent's causing it was not determined by the agent's character, desires, or past, since that would just be event causation. As Chisholm explains it, humans have "a prerogative which some would attribute only to God: each of us, when we act, is a prime mover unmoved. In doing what we do, we cause certain events to happen, and nothing—or no one—causes us to cause those events to happen."
This theory involves a difficulty which has long been associated with the idea of an unmoved mover. If a free action was not caused by any event, such as a change in the agent or an act of the will, then what is the difference between saying that an agent caused the event and simply saying that the event happened on its own? As William James put it, "If a 'free' act be a sheer novelty, that comes not from me, the previous me, but ex nihilo, and simply tacks itself on to me, how can I, the previous I, be responsible? How can I have any permanent character that will stand still long enough for praise or blame to be awarded?"
Agent causation advocates respond that agent causation is actually more intuitive than event causation. They point to David Hume's argument that when we see two events happen in succession, our belief that one event caused the other cannot be justified rationally (known as the problem of induction). If that is so, where does our belief in causality come from? According to Thomas Reid, "the conception of an efficient cause may very probably be derived from the experience we have had ... of our own power to produce certain effects." Our everyday experiences of agent causation provide the basis for the idea of event causation.
== Event-causal theories ==
Event-causal accounts of incompatibilist free will typically rely upon physicalist models of mind (like those of the compatibilist), yet they presuppose physical indeterminism, in which certain indeterministic events are said to be caused by the agent. A number of event-causal accounts of free will have been created, referenced here as deliberative indeterminism, centred accounts, and efforts of will theory. The first two accounts do not require free will to be a fundamental constituent of the universe. Ordinary randomness is appealed to as supplying the "elbow room" that libertarians believe necessary. A first common objection to event-causal accounts is that the indeterminism could be destructive and could therefore diminish control by the agent rather than provide it (related to the problem of origination). A second common objection to these models is that it is questionable whether such indeterminism could add any value to deliberation over that which is already present in a deterministic world.
Deliberative indeterminism asserts that the indeterminism is confined to an earlier stage in the decision process. This is intended to provide an indeterminate set of possibilities to choose from, while not risking the introduction of luck (random decision making). The selection process is deterministic, although it may be based on earlier preferences established by the same process. Deliberative indeterminism has been referenced by Daniel Dennett and John Martin Fischer. An obvious objection to such a view is that an agent cannot be assigned ownership over their decisions (or preferences used to make those decisions) to any greater degree than that of a compatibilist model.
Centred accounts propose that for any given decision between two possibilities, the strength of reason will be considered for each option, yet there is still a probability the weaker candidate will be chosen. An obvious objection to such a view is that decisions are explicitly left up to chance, and origination or responsibility cannot be assigned for any given decision.
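The centred-account picture can be sketched as a small simulation. This is a toy illustration, not a model from the literature: the function name, the reason-strength numbers, and the normalization into probabilities are all assumptions made for the example. Reason-strengths set the odds of each option, yet the weaker option retains a real chance of being chosen, which is precisely the residual chance the "mere luck" objection targets.

```python
import random

# Toy sketch of a "centred" choice (invented for illustration):
# the strength of reasons fixes selection probabilities, but the
# weaker candidate can still win.
def centred_choice(options, strengths, rng=random):
    # Normalize reason-strengths into selection probabilities.
    total = sum(strengths)
    probs = [s / total for s in strengths]
    return rng.choices(options, weights=probs, k=1)[0]

rng = random.Random(0)
picks = [centred_choice(["stronger", "weaker"], [0.8, 0.2], rng)
         for _ in range(1000)]
weak_wins = picks.count("weaker")
# The stronger reason usually prevails, but not always.
assert 0 < weak_wins < 1000
```

The objection in the paragraph above falls directly out of the sketch: whenever `weaker` wins, nothing about the agent, only the draw from `rng`, explains the outcome, so it is unclear how origination or responsibility could be assigned for that decision.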
Efforts of will theory is related to the role of will power in decision making. It suggests that the indeterminacy of agent volition processes could map to the indeterminacy of certain physical events—and the outcomes of these events could therefore be considered caused by the agent. Models of volition have been constructed in which it is seen as a particular kind of complex, high-level process with an element of physical indeterminism. An example of this approach is that of Robert Kane, where he hypothesizes that "in each case, the indeterminism is functioning as a hindrance or obstacle to her realizing one of her purposes—a hindrance or obstacle in the form of resistance within her will which must be overcome by effort." According to Robert Kane such "ultimate responsibility" is a required condition for free will. An important factor in such a theory is that the agent cannot be reduced to physical neuronal events, but rather mental processes are said to provide an equally valid account of the determination of outcome as their physical processes (see non-reductive physicalism).
=== Epicurus ===
Epicurus, an ancient Hellenistic philosopher, argued that as atoms moved through the void, there were occasions when they would "swerve" (clinamen) from their otherwise determined paths, thus initiating new causal chains. Epicurus argued that these swerves would allow us to be more responsible for our actions, something impossible if every action was deterministically caused.
Epicurus did not say the swerve was directly involved in decisions. But following Aristotle, Epicurus thought human agents have the autonomous ability to transcend necessity and chance (both of which destroy responsibility), so that praise and blame are appropriate. Epicurus finds a tertium quid, beyond necessity and beyond chance. His tertium quid is agent autonomy, what is "up to us."
[S]ome things happen of necessity (ἀνάγκη), others by chance (τύχη), others through our own agency (παρ' ἡμᾶς). [...]. [N]ecessity destroys responsibility and chance is inconstant; whereas our own actions are autonomous, and it is to them that praise and blame naturally attach.
The Epicurean philosopher Lucretius (1st century BC) saw the randomness as enabling free will, even if he could not explain exactly how, beyond the fact that random swerves would break the causal chain of determinism.
Again, if all motion is always one long chain, and new motion arises out of the old in order invariable, and if the first-beginnings do not make by swerving a beginning of motion such as to break the decrees of fate, that cause may not follow cause from infinity, whence comes this freedom (libera) in living creatures all over the earth, whence I say is this will (voluntas) wrested from the fates by which we proceed whither pleasure leads each, swerving also our motions not at fixed times and fixed places, but just where our mind has taken us? For undoubtedly it is his own will in each that begins these things, and from the will movements go rippling through the limbs.
However, the interpretation of these ancient philosophers is controversial. Tim O'Keefe has argued that Epicurus and Lucretius were not libertarians at all, but compatibilists.
=== Robert Nozick ===
Robert Nozick put forward an indeterministic theory of free will in Philosophical Explanations (1981).
When human beings become agents through reflexive self-awareness, they express their agency by having reasons for acting, to which they assign weights. Choosing the dimensions of one's identity is a special case, in which the assigning of weight to a dimension is partly self-constitutive. But all acting for reasons is constitutive of the self in a broader sense, namely, by its shaping one's character and personality in a manner analogous to the shaping that law undergoes through the precedent set by earlier court decisions. Just as a judge does not merely apply the law but to some degree makes it through judicial discretion, so too a person does not merely discover weights but assigns them; one not only weighs reasons but also weights them. Set in train is a process of building a framework for future decisions that we are tentatively committed to.
The lifelong process of self-definition in this broader sense is construed indeterministically by Nozick. The weighting is "up to us" in the sense that it is undetermined by antecedent causal factors, even though subsequent action is fully caused by the reasons one has accepted. He compares assigning weights in this indeterministic sense to "the currently orthodox interpretation of quantum mechanics", following von Neumann in understanding a quantum mechanical system as in a superposition or probability mixture of states, which changes continuously in accordance with quantum mechanical equations of motion and discontinuously via measurement or observation that "collapses the wave packet" from a superposition to a particular state. Analogously, a person before decision has reasons without fixed weights: he is in a superposition of weights. The process of decision reduces the superposition to a particular state that causes action.
=== Robert Kane ===
One particularly influential contemporary theory of libertarian free will is that of Robert Kane. Kane argued that "(1) the existence of alternative possibilities (or the agent's power to do otherwise) is a necessary condition for acting freely, and that (2) determinism is not compatible with alternative possibilities (it precludes the power to do otherwise)". The crux of Kane's position is grounded not in a defense of alternative possibilities (AP) but in the notion of what Kane refers to as ultimate responsibility (UR). Thus, AP is a necessary but insufficient criterion for free will. It is necessary that there be (metaphysically) real alternatives for our actions, but that is not enough; our actions could be random without being in our control. The control is found in "ultimate responsibility".
Ultimate responsibility entails that agents must be the ultimate creators (or originators) and sustainers of their own ends and purposes. There must be more than one way for a person's life to turn out (AP). More importantly, whichever way it turns out must be based in the person's willing actions. Kane defines it as follows:
(UR) An agent is ultimately responsible for some (event or state) E's occurring only if (R) the agent is personally responsible for E's occurring in a sense which entails that something the agent voluntarily (or willingly) did or omitted either was, or causally contributed to, E's occurrence and made a difference to whether or not E occurred; and (U) for every X and Y (where X and Y represent occurrences of events and/or states) if the agent is personally responsible for X and if Y is an arche (sufficient condition, cause or motive) for X, then the agent must also be personally responsible for Y.
In short, "an agent must be responsible for anything that is a sufficient reason (condition, cause or motive) for the action's occurring."
What allows for ultimacy of creation in Kane's picture are what he refers to as "self-forming actions" or SFAs—those moments of indecision during which people experience conflicting wills. These SFAs are the undetermined, regress-stopping voluntary actions or refrainings in the life histories of agents that are required for UR. UR does not require that every act done of our own free will be undetermined and thus that, for every act or choice, we could have done otherwise; it requires only that certain of our choices and actions be undetermined (and thus that we could have done otherwise), namely SFAs. These form our character or nature; they inform our future choices, reasons and motivations in action. If a person has had the opportunity to make a character-forming decision (SFA), they are responsible for the actions that are a result of their character.
==== Critique ====
Randolph Clarke objects that Kane's depiction of free will is not truly libertarian but rather a form of compatibilism. The objection asserts that although the outcome of an SFA is not determined, one's history up to the event is; so the fact that an SFA will occur is also determined. The outcome of the SFA is based on chance, and from that point on one's life is determined. This kind of freedom, says Clarke, is no different from the kind of freedom argued for by compatibilists, who assert that even though our actions are determined, they are free because they are in accordance with our own wills, much like the outcome of an SFA.
Kane responds that the difference between causal indeterminism and compatibilism is "ultimate control—the originative control exercised by agents when it is 'up to them' which of a set of possible choices or actions will now occur, and up to no one and nothing else over which the agents themselves do not also have control". UR assures that the sufficient conditions for one's actions do not lie before one's own birth.
Galen Strawson holds that there is a fundamental sense in which free will is impossible, whether determinism is true or not. He argues for this position with what he calls his "basic argument", which aims to show that no-one is ever ultimately morally responsible for their actions, and hence that no one has free will in the sense that usually concerns us.
In his book defending compatibilism, Freedom Evolves, Daniel Dennett spends a chapter criticising Kane's theory. Kane believes freedom is based on certain rare and exceptional events, which he calls self-forming actions or SFAs. Dennett notes that there is no guarantee such an event will occur in an individual's life. If it does not, the individual does not in fact have free will at all, according to Kane. Yet they will seem the same as anyone else. Dennett finds an essentially undetectable notion of free will to be incredible.
== Criticism of Libertarianism ==
Metaphysical libertarianism has faced significant criticism from both scientific and philosophical perspectives.
One major objection comes from neuroscience. Experiments by Benjamin Libet and others suggest that the brain may initiate decisions before subjects become consciously aware of them, raising questions about whether conscious free will exists at all. Critics argue this challenges the libertarian notion of uncaused or agent-caused actions.
Another prominent critique is the "luck objection." This argument claims that if an action is not determined by prior causes, then it seems to happen by chance. In this view, libertarian freedom risks reducing choice to randomness, undermining meaningful moral responsibility.
Compatibilists, such as Daniel Dennett, argue that free will is compatible with determinism and that libertarianism wrongly assumes that causal determinism automatically negates responsibility. They maintain that what matters is whether a person's actions stem from their internal motivations—not whether those actions are ultimately uncaused.
Some philosophers also raise metaphysical concerns about agent-causation, arguing that positing the agent as a "first cause" introduces mysterious or incoherent forms of causation into an otherwise naturalistic worldview.
== See also ==
== References ==
== Further reading ==
Clarke, Randolph (2003). Libertarian Accounts of Free Will. New York: Oxford University Press. ISBN 0-19-515987-X.
Kane, Robert (1998). The Significance of Free Will. New York: Oxford University Press. ISBN 0-19-512656-4.
== External links ==
Stanford Encyclopedia of Philosophy
Peter Van Inwagen: The Mystery of Metaphysical Freedom
Collection of papers on various aspects of Free Will and Determinism hosted by Ted Honderich
Information Philosopher
MindPapers Collection of articles on Libertarianism about Free Will | Wikipedia/Libertarianism_(metaphysics) |
In the field of artificial intelligence, an inference engine is a software component of an intelligent system that applies logical rules to the knowledge base to deduce new information. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world. The inference engine applied logical rules to the knowledge base and deduced new knowledge. This process would iterate as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.
Additionally, the concept of 'inference' has expanded to include the process through which trained neural networks generate predictions or decisions. In this context, an 'inference engine' could refer to the specific part of the system, or even the hardware, that executes these operations. This type of inference plays a crucial role in various applications, including (but not limited to) image recognition, natural language processing, and autonomous vehicles. The inference phase in these applications is typically characterized by a high volume of data inputs and real-time processing requirements.
== Architecture ==
The logic that an inference engine uses is typically represented as IF-THEN rules. The general format of such rules is IF <logical expression> THEN <logical expression>. Prior to the development of expert systems and inference engines, artificial intelligence researchers focused on more powerful theorem prover environments that offered much fuller implementations of first-order logic. For example, general statements that included universal quantification (for all X some statement is true) and existential quantification (there exists some X such that some statement is true). What researchers discovered is that the power of these theorem-proving environments was also their drawback. Back in 1965, it was far too easy to create logical expressions that could take an indeterminate or even infinite time to terminate. For example, it is common in universal quantification to make statements over an infinite set such as the set of all natural numbers. Such statements are perfectly reasonable and even required in mathematical proofs but when included in an automated theorem prover executing on a computer may cause the computer to fall into an infinite loop. Focusing on IF-THEN statements (what logicians call modus ponens) still gave developers a very powerful general mechanism to represent logic, but one that could be used efficiently with computational resources. What is more, there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.
A simple example of modus ponens often used in introductory logic books is "If you are human then you are mortal". This can be represented in pseudocode as:
Rule1: Human(x) => Mortal(x)
A trivial example of how this rule would be used in an inference engine is as follows. In forward chaining, the inference engine would find any facts in the knowledge base that matched Human(x) and for each fact it found would add the new information Mortal(x) to the knowledge base. So if it found an object called Socrates that was human it would deduce that Socrates was mortal. In backward chaining, the system would be given a goal, e.g. to answer the question "Is Socrates mortal?" It would search through the knowledge base and determine if Socrates was human and, if so, would assert he is also mortal. However, in backward chaining a common technique was to integrate the inference engine with a user interface. In that way, rather than simply being automated, the system could now be interactive. In this trivial example, if the system was given the goal of answering whether Socrates was mortal and it didn't yet know if he was human, it would generate a window to ask the user the question "Is Socrates human?" and would then use that information accordingly.
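The Socrates example can be sketched in a few lines of code. This is a hedged illustration only: the fact and rule representation below is invented for this sketch and does not follow any particular engine's format.

```python
# Illustrative sketch of the rule Human(x) => Mortal(x) in both chaining
# modes. The (predicate, object) tuple representation is an assumption
# made for this example, not any specific engine's data model.

facts = {("Human", "Socrates")}
rules = {"Mortal": "Human"}   # consequent predicate -> antecedent predicate

def forward_chain(facts, rules):
    """Start from the known facts and assert every new fact the rules allow."""
    derived = set(facts)
    changed = True
    while changed:                     # iterate until no rule adds anything new
        changed = False
        for consequent, antecedent in rules.items():
            for pred, obj in list(derived):
                new = (consequent, obj)
                if pred == antecedent and new not in derived:
                    derived.add(new)
                    changed = True
    return derived

def backward_chain(goal, facts, rules):
    """Start from a goal and work backward to the facts that establish it."""
    pred, obj = goal
    if goal in facts:
        return True
    if pred in rules:                  # reduce the goal to a subgoal
        return backward_chain((rules[pred], obj), facts, rules)
    return False                       # an interactive system could ask the user here

print(backward_chain(("Mortal", "Socrates"), facts, rules))  # → True
```

Running `forward_chain(facts, rules)` adds ("Mortal", "Socrates") to the derived facts, while `backward_chain` mirrors the goal-driven search described above; the final `return False` marks the point where an interactive system would query the user instead of giving up.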
This innovation of integrating the inference engine with a user interface led to the second early advancement of expert systems: explanation capabilities. The explicit representation of knowledge as rules rather than code made it possible to generate explanations to users: both explanations in real time and after the fact. So if the system asked the user "Is Socrates human?", the user may wonder why she was being asked that question and the system would use the chain of rules to explain why it was currently trying to ascertain that bit of knowledge: that is, it needs to determine if Socrates is mortal and to do that needs to determine if he is human. At first these explanations were not much different than the standard debugging information that developers deal with when debugging any system. However, an active area of research was utilizing natural language technology to ask, understand, and generate questions and explanations using natural languages rather than computer formalisms.
An inference engine cycles through three sequential steps: match rules, select rules, and execute rules. The execution of the rules will often result in new facts or goals being added to the knowledge base, which will trigger the cycle to repeat. This cycle continues until no new rules can be matched.
In the first step, match rules, the inference engine finds all of the rules that are triggered by the current contents of the knowledge base. In forward chaining, the engine looks for rules where the antecedent (left hand side) matches some fact in the knowledge base. In backward chaining, the engine looks for antecedents that can satisfy one of the current goals.
In the second step, select rules, the inference engine prioritizes the various rules that were matched to determine the order to execute them. In the final step, execute rules, the engine executes each matched rule in the order determined in step two and then iterates back to step one again. The cycle continues until no new rules are matched.
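The three-step cycle above can be sketched as follows. This is a hedged illustration: the (priority, condition, action) rule triples are invented for this sketch, and priority ordering is only one possible conflict-resolution strategy; production systems use richer pattern matching (e.g. the Rete algorithm mentioned below).

```python
# A minimal sketch of the match-select-execute cycle. The rule format is
# an assumption made for this example, not any real engine's API.

def run_engine(kb, rules):
    """kb: a set of facts; returns the knowledge base at quiescence."""
    while True:
        # 1. Match: rules whose condition holds and whose action adds new facts
        matched = [(p, cond, act) for (p, cond, act) in rules
                   if cond(kb) and not act(kb) <= kb]
        if not matched:
            break                          # no new rules match: the cycle ends
        # 2. Select: order the conflict set, here simply by priority
        matched.sort(key=lambda rule: rule[0], reverse=True)
        # 3. Execute: fire each rule, then iterate back to matching
        for _, _, act in matched:
            kb = kb | act(kb)
    return kb

rules = [
    (1, lambda kb: "wet" in kb,  lambda kb: {"slippery"}),
    (2, lambda kb: "rain" in kb, lambda kb: {"wet"}),
]
print(sorted(run_engine({"rain"}, rules)))  # → ['rain', 'slippery', 'wet']
```

Note how firing the "rain" rule adds a fact that triggers the "wet" rule on the next pass, which is exactly the iteration back to step one described above.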
== Implementations ==
Early inference engines focused primarily on forward chaining. These systems were usually implemented in the Lisp programming language. Lisp was a frequent platform for early AI research due to its strong capability to do symbolic manipulation. Also, as an interpreted language it offered productive development environments appropriate to debugging complex programs. The trade-off for these benefits was that Lisp programs tended to be slower and less robust than programs written in compiled languages of the time, such as C. A common approach in these early days was to take an expert system application and repackage the inference engine used for that system as a re-usable tool other researchers could use for the development of other expert systems. For example, MYCIN was an early expert system for medical diagnosis and EMYCIN was an inference engine extrapolated from MYCIN and made available for other researchers.
As expert systems moved from research prototypes to deployed systems there was more focus on issues such as speed and robustness. One of the first and most popular forward chaining engines was OPS5, which used the Rete algorithm to optimize the efficiency of rule firing. Another very popular technology that was developed was the Prolog logic programming language. Prolog focused primarily on backward chaining and also featured various commercial versions and optimizations for efficiency and robustness.
As expert systems prompted significant interest from the business world, various companies, many of them started or guided by prominent AI researchers, created productized versions of inference engines. For example, Intellicorp was initially guided by Edward Feigenbaum. These inference engine products were also often developed in Lisp at first. However, demands for more affordable and commercially viable platforms eventually made personal computer platforms very popular.
=== Open source implementations ===
Open source implementations include ClipsRules and RefPerSys (inspired by CAIA and the work of Jacques Pitrat). The Frama-C static source code analyzer also uses some inference engine techniques.
== See also ==
Geometric and Topological Inference
Action selection
Backward chaining
Expert system
Forward chaining
Inductive inference
== References == | Wikipedia/Inference_engine |
In the philosophy of mind, functionalism is the thesis that each and every mental state (for example, the state of having a belief, of having a desire, or of being in pain) is constituted solely by its functional role, which means its causal relation to other mental states, sensory inputs, and behavioral outputs. Functionalism developed largely as an alternative to the identity theory of mind and behaviorism.
Functionalism is a theoretical level between physical implementation and behavioral output. It therefore differs from its predecessors, Cartesian dualism (which posits independent mental and physical substances) and Skinnerian behaviorism and physicalism (which admit only physical substances), because it is concerned only with the effective functions of the brain, through its organization or its "software programs".
Since a mental state is identified by a functional role, it is said to be realized on multiple levels; in other words, it is able to be manifested in various systems, even perhaps computers, so long as the system performs the appropriate functions. While a computer's program performs the functions via computations on inputs to give outputs, implemented via its electronic substrate, a brain performs the functions via its biological operation and stimulus responses.
== Multiple realizability ==
An important part of some arguments for functionalism is the idea of multiple realizability. According to standard functionalist theories, a mental state corresponds to a functional role. It is like a valve; a valve can be made of plastic or metal or other material, as long as it performs the proper function (controlling the flow of a liquid or gas). Similarly, functionalists argue, a mental state can be explained without considering the state of the underlying physical medium (such as the brain) that realizes it; one only needs to consider higher-level functions. Because a mental state is not limited to a particular medium, it can be realized in multiple ways, including, theoretically, with non-biological systems, such as computers. A silicon-based machine could have the same sort of mental life that a human being has, provided that its structure realized the proper functional roles.
While most functionalist theories accept the multiple realizability of mental states, some functionalist theories, such as the Functional Specification Theories (FSTs), reject this view. FSTs were most notably developed by David Lewis and David Malet Armstrong. According to FSTs, mental states are the particular "realizers" of the functional role, not the functional role itself. The mental state of belief, for example, just is whatever brain or neurological process that realizes the appropriate belief function. Thus, unlike standard versions of functionalism (often called Functional State Identity Theories), FSTs do not allow for the multiple realizability of mental states, because the fact that mental states are realized by brain states is essential. What often drives this view is the belief that if we were to encounter an alien race with a cognitive system composed of significantly different material from humans' (e.g., silicon-based) but performing the same functions as human mental states (for example, they tend to yell "Ouch!" when poked with sharp objects), we would say that their type of mental state might be similar to ours but it is not the same. For some, this may be a disadvantage to FSTs. Indeed, one of Hilary Putnam's arguments for his version of functionalism relied on the intuition that such alien creatures would have the same mental states as humans do, and that the multiple realizability of standard functionalism makes it a better theory of mind.
== Types ==
=== Machine-state functionalism ===
The broad position of "functionalism" can be articulated in many different varieties. The first formulation of a functionalist theory of mind was put forth by Hilary Putnam in the 1960s. This formulation, which is now called machine-state functionalism, or just machine functionalism, was inspired by the analogies which Putnam and others noted between the mind and the theoretical "machines" or computers capable of computing any given algorithm which were developed by Alan Turing (called Turing machines). Putnam himself, by the mid-1970s, had begun questioning this position. The beginning of his opposition to machine-state functionalism can be read about in his Twin Earth thought experiment.
In non-technical terms, a Turing machine is not a physical object, but rather an abstract machine built upon a mathematical model. Typically, a Turing Machine has a horizontal tape divided into rectangular cells arranged from left to right. The tape itself is infinite in length, and each cell may contain a symbol. The symbols used for any given "machine" can vary. The machine has a read-write head that scans cells and moves in left and right directions. The action of the machine is determined by the symbol in the cell being scanned and a table of transition rules that serve as the machine's programming. Because of the infinite tape, a traditional Turing Machine has an infinite amount of time to compute any particular function or any number of functions. In the below example, each cell is either blank (B) or has a 1 written on it. These are the inputs to the machine. The possible outputs are:
Halt: Do nothing.
R: move one square to the right.
L: move one square to the left.
B: erase whatever is on the square.
1: erase whatever is on the square and print a '1'.
An extremely simple example of a Turing machine writes out the sequence '111' after scanning three blank squares and then stops, as specified by the following machine table:

               Scans B                     Scans 1
  State 1      write 1; stay in state 1    move right; go to state 2
  State 2      write 1; stay in state 2    move right; go to state 3
  State 3      write 1; stay in state 3    stay in state 3 (halt)
This table states that if the machine is in state one and scans a blank square (B), it will print a 1 and remain in state one. If it is in state one and reads a 1, it will move one square to the right and also go into state two. If it is in state two and reads a B, it will print a 1 and stay in state two. If it is in state two and reads a 1, it will move one square to the right and go into state three. If it is in state three and reads a B, it prints a 1 and remains in state three. Finally, if it is in state three and reads a 1, then it will stay in state three.
The essential point to consider here is the nature of the states of the Turing machine. Each state can be defined exclusively in terms of its relations to the other states as well as inputs and outputs. State one, for example, is simply the state in which the machine, if it reads a B, writes a 1 and stays in that state, and in which, if it reads a 1, it moves one square to the right and goes into a different state. This is the functional definition of state one; it is its causal role in the overall system. The details of how it accomplishes what it accomplishes and of its material constitution are completely irrelevant.
The above point is critical to an understanding of machine-state functionalism. Since Turing machines are not required to be physical systems, "anything capable of going through a succession of states in time can be a Turing machine". Because biological organisms “go through a succession of states in time”, any such organisms could also be equivalent to Turing machines.
According to machine-state functionalism, the nature of a mental state is just like the nature of the Turing machine states described above. If one can show the rational functioning and computing skills of these machines to be comparable to the rational functioning and computing skills of human beings, it follows that Turing machine behavior closely resembles that of human beings. Therefore, it is not a particular physical-chemical composition responsible for the particular machine or mental state, it is the programming rules which produce the effects that are responsible. To put it another way, any rational preference is due to the rules being followed, not to the specific material composition of the agent.
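The three-state machine described above can be simulated in a few lines, which makes the functional point concrete: each state is nothing but its table of transitions. This is a hedged sketch; the dictionary representation of the table is an assumption made for this illustration.

```python
# Simulation of the three-state machine described above. Each state is
# defined purely by its transitions -- the functional characterization
# that machine-state functionalism relies on.

# (state, scanned symbol) -> (symbol to write, head move, next state).
# A missing entry means Halt: do nothing.
TABLE = {
    (1, 'B'): ('1', 0, 1),   # state 1 reads a blank: print 1, stay in state 1
    (1, '1'): ('1', +1, 2),  # state 1 reads a 1: move right, go to state 2
    (2, 'B'): ('1', 0, 2),
    (2, '1'): ('1', +1, 3),
    (3, 'B'): ('1', 0, 3),
    # (3, '1') is absent: the machine stops, staying in state 3
}

def run(table, state=1):
    tape, head = {}, 0
    while True:
        symbol = tape.get(head, 'B')       # unwritten cells read as blank
        if (state, symbol) not in table:
            break                          # Halt: do nothing
        write, move, state = table[(state, symbol)]
        tape[head] = write                 # rewriting the same symbol models "move only"
        head += move
    return ''.join(tape[i] for i in sorted(tape))  # written cells, left to right

print(run(TABLE))  # → 111
```

Nothing in `run` depends on what the machine is made of, only on the transition table, which is the sense in which the details of material constitution are "completely irrelevant".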
=== Psycho-functionalism ===
A second form of functionalism is based on the rejection of behaviorist theories in psychology and their replacement with empirical cognitive models of the mind. This view is most closely associated with Jerry Fodor and Zenon Pylyshyn and has been labeled psycho-functionalism.
The fundamental idea of psycho-functionalism is that psychology is an irreducibly complex science and that the terms that we use to describe the entities and properties of the mind in our best psychological theories cannot be redefined in terms of simple behavioral dispositions, and further, that such a redefinition would not be desirable or salient were it achievable. Psychofunctionalists view psychology as employing the same sorts of irreducibly teleological or purposive explanations as the biological sciences. Thus, for example, the function or role of the heart is to pump blood, that of the kidney is to filter it and to maintain certain chemical balances and so on—this is what accounts for the purposes of scientific explanation and taxonomy. There may be an infinite variety of physical realizations for all of the mechanisms, but what is important is only their role in the overall biological theory. In an analogous manner, the role of mental states, such as belief and desire, is determined by the functional or causal role that is designated for them within our best scientific psychological theory. If some mental state which is postulated by folk psychology (e.g. hysteria) is determined not to have any fundamental role in cognitive psychological explanation, then that particular state may be considered not to exist.
On the other hand, if it turns out that there are states which theoretical cognitive psychology posits as necessary for explanation of human behavior but which are not foreseen by ordinary folk psychological language, then these entities or states exist.
=== Analytic functionalism ===
A third form of functionalism is concerned with the meanings of theoretical terms in general. This view is most closely associated with David Lewis and is often referred to as analytic functionalism or conceptual functionalism. The basic idea of analytic functionalism is that theoretical terms are implicitly defined by the theories in whose formulation they occur and not by intrinsic properties of the phonemes they comprise. In the case of ordinary language terms, such as "belief", "desire", or "hunger", the idea is that such terms get their meanings from our common-sense "folk psychological" theories about them, but that such conceptualizations are not sufficient to withstand the rigor imposed by materialistic theories of reality and causality. Such terms are subject to conceptual analyses which take something like the following form:
Mental state M is the state that is caused by P and causes Q.
For example, the state of pain is caused by sitting on a tack and causes loud cries, and higher order mental states of anger and resentment directed at the careless person who left a tack lying around. These sorts of functional definitions in terms of causal roles are claimed to be analytic and a priori truths about the submental states and the (largely fictitious) propositional attitudes they describe. Hence, its proponents are known as analytic or conceptual functionalists. The essential difference between analytic and psychofunctionalism is that the latter emphasizes the importance of laboratory observation and experimentation in the determination of which mental state terms and concepts are genuine and which functional identifications may be considered to be genuinely contingent and a posteriori identities. The former, on the other hand, claims that such identities are necessary and not subject to empirical scientific investigation.
=== Homuncular functionalism ===
Homuncular functionalism was developed largely by Daniel Dennett and has been advocated by William Lycan. It arose in response to the challenges that Ned Block's China Brain (a.k.a. Chinese nation) and John Searle's Chinese room thought experiments presented for the more traditional forms of functionalism (see below under "Criticism"). In attempting to overcome the conceptual difficulties that arose from the idea of a nation full of Chinese people wired together, each person working as a single neuron to produce in the wired-together whole the functional mental states of an individual mind, many functionalists simply bit the bullet, so to speak, and argued that such a Chinese nation would indeed possess all of the qualitative and intentional properties of a mind; i.e. it would become a sort of systemic or collective mind with propositional attitudes and other mental characteristics. Whatever the worth of this latter hypothesis, it was immediately objected that it entailed an unacceptable sort of mind-mind supervenience: the systemic mind which somehow emerged at the higher-level must necessarily supervene on the individual minds of each individual member of the Chinese nation, to stick to Block's formulation. But this would seem to put into serious doubt, if not directly contradict, the fundamental idea of the supervenience thesis: there can be no change in the mental realm without some change in the underlying physical substratum. This can be easily seen if we label the set of mental facts that occur at the higher-level M1 and the set of mental facts that occur at the lower-level M2. Then M1 and M2 both supervene on the physical facts, but a change of M1 to M2 (say) could occur without any change in these facts.
Since mind-mind supervenience had come to seem unacceptable in functionalist circles, it seemed to some that the only way to resolve the puzzle was to postulate the existence of an entire hierarchical series of mind levels (analogous to homunculi) which became less and less sophisticated in terms of functional organization and physical composition all the way down to the level of the physico-mechanical neuron or group of neurons. The homunculi at each level, on this view, have authentic mental properties but become simpler and less intelligent as one works one's way down the hierarchy.
=== Mechanistic functionalism ===
Mechanistic functionalism, originally formulated and defended by Gualtiero Piccinini and Carl Gillett independently, augments previous functionalist accounts of mental states by maintaining that any psychological explanation must be rendered in mechanistic terms. That is, instead of mental states receiving a purely functional explanation in terms of their relations to other mental states, like those listed above, functions are seen as playing only a part—the other part being played by structures— of the explanation of a given mental state.
A mechanistic explanation involves decomposing a given system, in this case a mental system, into its component physical parts, their activities or functions, and their combined organizational relations. On this account the mind remains a functional system, but one that is understood in mechanistic terms. This account remains a sort of functionalism because functional relations are still essential to mental states, but it is mechanistic because the functional relations are always manifestations of concrete structures—albeit structures understood at a certain level of abstraction. Functions are individuated and explained either in terms of the contributions they make to the given system or in teleological terms. If the functions are understood in teleological terms, then they may be characterized either etiologically or non-etiologically.
Mechanistic functionalism leads functionalism away from the traditional functionalist autonomy of psychology from neuroscience and towards integrating psychology and neuroscience. By providing an applicable framework for merging traditional psychological models with neurological data, mechanistic functionalism may be understood as reconciling the functionalist theory of mind with neurological accounts of how the brain actually works. This is due to the fact that mechanistic explanations of function attempt to provide an account of how functional states (mental states) are physically realized through neurological mechanisms.
== Physicalism ==
There is much confusion about the sort of relationship that is claimed to exist (or not exist) between the general thesis of functionalism and physicalism. It has often been claimed that functionalism somehow "disproves" or falsifies physicalism tout court (i.e. without further explanation or description). On the other hand, most philosophers of mind who are functionalists claim to be physicalists—indeed, some of them, such as David Lewis, have claimed to be strict reductionist-type physicalists.
Functionalism is fundamentally what Ned Block has called a broadly metaphysical thesis as opposed to a narrowly ontological one. That is, functionalism is not so much concerned with what there is as with what it is that characterizes a certain type of mental state, e.g. pain, as the type of state that it is. Previous attempts to answer the mind-body problem have all tried to resolve it by answering both questions: dualism says there are two substances and that mental states are characterized by their immateriality; behaviorism claimed that there was one substance and that mental states were behavioral dispositions; physicalism asserted the existence of just one substance and characterized the mental states as physical states (as in "pain = C-fiber firings").
On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of physical matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so.
In the case of David Lewis, there is a distinction in the concepts of "having pain" (a rigid designator true of the same things in all possible worlds) and just "pain" (a non-rigid designator). Pain, for Lewis, stands for something like the definite description "the state with the causal role x". The referent of the description in humans is a type of brain state to be determined by science. The referent among silicon-based life forms is something else. The referent of the description among angels is some immaterial, non-physical state. For Lewis, therefore, local type-physical reductions are possible and compatible with conceptual functionalism. (See also Lewis's mad pain and Martian pain.) There seems to be some confusion between types and tokens that needs to be cleared up in the functionalist analysis.
== Criticism ==
In a 2020 PhilPapers survey, functionalism emerged as the most popular theory, with 33% of respondents accepting or leaning towards it, followed by dualism at 22%, and identity theory at 13%. Nevertheless, functionalism has counter-intuitive implications, often criticized using thought experiments.
=== China brain ===
Ned Block argues against the functionalist proposal of multiple realizability, where hardware implementation is irrelevant because only the functional level is important. The "China brain" or "Chinese nation" thought experiment involves supposing that the entire nation of China systematically organizes itself to operate just like a brain, with each individual acting as a neuron. (The tremendous difference in speed of operation of each unit is not addressed.) According to functionalism, so long as the people are performing the proper functional roles, with the proper causal relations between inputs and outputs, the system will be a real mind, with mental states, consciousness, and so on. Block contends that this scenario is implausible, suggesting that there must be a flaw in the functionalist thesis if it allows such a system to be considered a legitimate mind.
Some functionalists believe China would have qualia but that, due to its size, it is impossible to imagine China being conscious. Indeed, it may be the case that we are constrained by our theory of mind and will never be able to understand what Chinese-nation consciousness is like. Therefore, if functionalism is true, either qualia will be present in any system that performs the correct functions, regardless of its physical structure, or qualia do not exist at all and are merely illusions.
=== The Chinese room ===
The Chinese room argument by John Searle is a direct attack on the claim that thought can be represented as a set of functions. The thought experiment asserts that it is possible to mimic intelligent action without any interpretation or understanding through the use of a purely functional system. In short, Searle describes a person who only speaks English who is in a room with only Chinese symbols in baskets and a rule book in English for moving the symbols around. The person is then ordered by people outside of the room to follow the rule book for sending certain symbols out of the room when given certain symbols. Further suppose that the people outside of the room are Chinese speakers and are communicating with the person inside via the Chinese symbols. According to Searle, it would be absurd to claim that the English speaker inside knows Chinese simply based on these syntactic processes. This thought experiment attempts to show that systems which operate merely on syntactic processes (inputs and outputs, based on algorithms) cannot realize any semantics (meaning) or intentionality (aboutness). Thus, Searle attacks the idea that thought can be equated with following a set of syntactic rules.
One common response to Searle's thought experiment is that there is a form of mental activity going on at a higher level than the man, and that the whole system needs to be considered. This suggests that the system does understand Chinese, even though the man in the room doesn't. The man is analogized to a CPU in a computer system. In response, Searle suggested the man in the room could memorize all the rules and symbol relations. Even if he internalized all the rules and performed the operations in his mind, he would still be manipulating symbols without understanding their meaning, according to Searle. Some critics consider that this symbol-manipulating subsystem of the brain can be viewed as a kind of separate, virtual mind, which would understand Chinese.
Functionalists also argue that it would be possible in theory to make a system that emulates on digital hardware each neuron of the brain of someone who understands Chinese, and that such a brain emulation would have the same mental processes, and would thus understand Chinese.
=== Inverted spectrum ===
Another main criticism of functionalism is the inverted spectrum or inverted qualia scenario, proposed specifically as an objection to functionalism by Ned Block. This thought experiment involves supposing that there is a person, call her Jane, who is born with a condition which makes her see the opposite spectrum of light that is normally perceived. Unlike normal people, Jane sees the color violet as yellow, orange as blue, and so forth. So, suppose, for example, that you and Jane are looking at the same orange. While you perceive the fruit as colored orange, Jane sees it as colored blue. However, when asked what color the piece of fruit is, both you and Jane will report "orange". In fact, one can see that all of your behavioral as well as functional relations to colors will be the same. Jane will, for example, properly obey traffic signs just as any other person would, even though this involves color perception. Therefore, the argument goes, since there can be two people who are functionally identical, yet have different mental states (differing in their qualitative or phenomenological aspects), functionalism is not robust enough to explain individual differences in qualia.
According to David Chalmers, all "functionally isomorphic" systems (those with the same "fine-grained functional organization", i.e., the same information processing) will have qualitatively identical conscious experiences. He calls this the principle of organizational invariance. For example, it implies that a silicon chip that is functionally isomorphic to a brain will have the same perception of the color red, given the same sensory inputs. He proposed the thought experiment of the "dancing qualia" to demonstrate it. It is a reductio ad absurdum argument that starts by supposing that two such systems have different qualia in the same situation. It involves a switch that alternates between a chunk of brain that causes the perception of red, and a functionally isomorphic system that causes the perception of blue, for example implemented as a silicon chip. Since both perform the same function within the brain, the subject would not notice any change during the switch. Chalmers argues that this would be highly implausible if the qualia were truly switching between red and blue, hence the contradiction. Therefore, he concludes that the dancing qualia scenario is impossible in practice, and the equivalent digital system would not only experience qualia, but it would have conscious experiences that are qualitatively identical to those of the biological system (e.g., seeing the same color). He also proposed a similar thought experiment, named the fading qualia, that argues that it is not possible for the qualia to fade when each biological neuron is replaced by a functional equivalent.
A related critique of the inverted spectrum argument is that it assumes that mental states (differing in their qualitative or phenomenological aspects) can be independent of the functional relations in the brain. Thus, it begs the question of functional mental states: its assumption denies the possibility of functionalism itself, without offering any independent justification for doing so (functionalism says that mental states are produced by the functional relations in the brain). This same type of problem—that there is no argument, just an antithetical assumption at their base—can also be said of both the Chinese room and the Chinese nation arguments.
=== Twin Earth ===
The Twin Earth thought experiment, introduced by Hilary Putnam, is responsible for one of the main arguments used against functionalism, although it was originally intended as an argument against semantic internalism. The thought experiment is simple and runs as follows. Imagine a Twin Earth which is identical to Earth in every way but one: water does not have the chemical structure H2O, but rather some other structure, say XYZ. It is critical, however, to note that XYZ on Twin Earth is still called "water" and exhibits all the same macro-level properties that H2O exhibits on Earth (i.e., XYZ is also a clear drinkable liquid that is in lakes, rivers, and so on). Since these worlds are identical in every way except in the underlying chemical structure of water, you and your Twin Earth doppelgänger see exactly the same things, meet exactly the same people, have exactly the same jobs, behave exactly the same way, and so on. In other words, since you share the same inputs, outputs, and relations between other mental states, you are functional duplicates. So, for example, you both believe that water is wet. However, the content of your mental state of believing that water is wet differs from your duplicate's because your belief is of H2O, while your duplicate's is of XYZ. Therefore, so the argument goes, since two people can be functionally identical, yet have different mental states, functionalism cannot sufficiently account for all mental states.
Most defenders of functionalism initially responded to this argument by attempting to maintain a sharp distinction between internal and external content. The internal contents of propositional attitudes, for example, would consist exclusively in those aspects of them which have no relation with the external world and which bear the necessary functional/causal properties that allow for relations with other internal mental states. Since no one has yet been able to formulate a clear basis or justification for the existence of such a distinction in mental contents, however, this idea has generally been abandoned in favor of externalist causal theories of mental contents (also known as informational semantics). Such a position is represented, for example, by Jerry Fodor's account of an "asymmetric causal theory" of mental content. This view simply entails the modification of functionalism to include within its scope a very broad interpretation of inputs and outputs to include the objects that are the causes of mental representations in the external world.
The Twin Earth argument hinges on the assumption that experience with an imitation water would cause a different mental state than experience with natural water. However, since no one would notice the difference between the two waters, this assumption is likely false. Further, this basic assumption is directly antithetical to functionalism, and so the Twin Earth argument does not constitute a genuine argument: the assumption entails a flat denial of functionalism itself (which holds that the two waters would not produce different mental states, because the functional relationships would remain unchanged).
=== Meaning holism ===
Another common criticism of functionalism is that it implies a radical form of semantic holism. Block and Fodor referred to this as the damn/darn problem. The difference between saying "damn" or "darn" when one smashes one's finger with a hammer can be mentally significant. But since these outputs are, according to functionalism, related to many (if not all) internal mental states, two people who experience the same pain and react with different outputs must share little (perhaps nothing) in common in any of their mental states. But this is counterintuitive; it seems clear that two people share something significant in their mental states of being in pain if they both smash their finger with a hammer, whether or not they utter the same word when they cry out in pain.
Another possible solution to this problem is to adopt a moderate (or molecularist) form of holism. But even if this succeeds in the case of pain, in the case of beliefs and meaning, it faces the difficulty of formulating a distinction between relevant and non-relevant contents (which can be difficult to do without invoking an analytic–synthetic distinction, as many seek to avoid).
=== Triviality arguments ===
According to Ned Block, if functionalism is to avoid the chauvinism of type-physicalism, it becomes overly liberal in "ascribing mental properties to things that do not in fact have them". As an example, he proposes that the economy of Bolivia might be organized such that the economic states, inputs, and outputs would be isomorphic to a person under some bizarre mapping from mental to economic variables.
Hilary Putnam, John Searle, and others have offered further arguments that functionalism is trivial, i.e. that the internal structures functionalism tries to discuss turn out to be present everywhere, so that functionalism reduces either to behaviorism, or to complete triviality and therefore a form of panpsychism. These arguments typically use the assumption that physics leads to a progression of unique states, and that functionalist realization is present whenever there is a mapping from the proposed set of mental states to physical states of the system. Given that the successive states of a physical system are always at least slightly distinct from one another, such a mapping will always exist, so any system is a mind. Formulations of functionalism which stipulate absolute requirements on interaction with external objects (external to the functional account, meaning not defined functionally) are reduced to behaviorism instead of absolute triviality, because the input-output behavior is still required.
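The structure of this triviality argument can be illustrated with a small sketch (an illustrative model, not code from any of the cited authors; all names below are hypothetical): because the successive physical states of a system are pairwise distinct, a "realization" mapping onto an arbitrary run of mental states can always be constructed.

```python
# Illustrative sketch of the triviality argument: any system whose
# physical states are pairwise distinct can be mapped onto an arbitrary
# run of functional ("mental") states. Function and variable names
# here are hypothetical, chosen only for this example.

def build_realization(physical_states, functional_run):
    """Pair each distinct physical state with the functional state
    the system is supposed to be 'in' at the same time step."""
    assert len(physical_states) == len(functional_run)
    assert len(set(physical_states)) == len(physical_states), \
        "physical states must be pairwise distinct"
    return dict(zip(physical_states, functional_run))

# A rock's (distinct) microphysical states over four time steps...
rock = ["p0", "p1", "p2", "p3"]
# ...and an arbitrary run of 'mental' states we want it to realize.
mind = ["believes-rain", "desires-umbrella", "intends-to-leave", "leaves"]

mapping = build_realization(rock, mind)
# The mapping exists for ANY such run, which is the alleged triviality:
# on this liberal notion of realization, every system realizes every mind.
```

The point of the sketch is that nothing constrains the mapping: any sequence of distinct states supports it, which is why critics conclude that unconstrained functional realization is trivial.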
Peter Godfrey-Smith has argued further that such formulations can still be reduced to triviality if they accept a somewhat innocent-seeming additional assumption. The assumption is that adding a transducer layer, that is, an input-output system, to an object should not change whether that object has mental states. The transducer layer is restricted to producing behavior according to a simple mapping, such as a lookup table, from inputs to actions on the system, and from the state of the system to outputs. However, since the system will be in unique states at each moment and at each possible input, such a mapping will always exist so there will be a transducer layer which will produce whatever physical behavior is desired.
Godfrey-Smith believes that these problems can be addressed using causality, but that it may be necessary to posit a continuum between objects being minds and not being minds rather than an absolute distinction. Furthermore, constraining the mappings seems to require either consideration of the external behavior as in behaviorism, or discussion of the internal structure of the realization as in identity theory; and though multiple realizability does not seem to be lost, the functionalist claim of the autonomy of high-level functional description becomes questionable.
== See also ==
== References ==
== Further reading ==
Armstrong, D.M. (1968). A Materialistic Theory of the Mind. London: RKP.
Baron-Cohen S.; Leslie A.; Frith U. (1985). "Does the Autistic Child Have a "Theory of Mind"?". Cognition. 21 (1): 37–46. doi:10.1016/0010-0277(85)90022-8. PMID 2934210. S2CID 14955234.
Block, Ned. (1980a). "Introduction: What Is Functionalism?" in Readings in Philosophy of Psychology. Cambridge, MA: Harvard University Press.
Block, Ned. (1980b). "Troubles With Functionalism", in Block (1980a).
Block, Ned. (1994). Qualia. In S. Guttenplan (ed), A Companion to Philosophy of Mind. Oxford: Blackwell
Block, Ned (1996). "What is functionalism?" (PDF). a revised version of the entry on functionalism in The Encyclopedia of Philosophy Supplement, Macmillan.
Block, Ned and Fodor, J. (1972). "What Psychological States Are Not". Philosophical Review 81.
Chalmers, David. (1996). The Conscious Mind. Oxford: Oxford University Press.
DeLancey, C. (2002). Passionate Engines: What Emotions Reveal about the Mind and Artificial Intelligence. Oxford: Oxford University Press.
Dennett, D. (1990). Quining Qualia. In W. Lycan (ed), Mind and Cognition. Oxford: Blackwell.
Levin, Janet. (2004). "Functionalism", The Stanford Encyclopedia of Philosophy (Fall 2004 Edition), E. Zalta (ed.). (online)
Lewis, David. (1966). "An Argument for the Identity Theory". Journal of Philosophy 63.
Lewis, David. (1980). "Mad Pain and Martian Pain". In Block (1980a) Vol. 1, pp. 216–222.
Lycan, W. (1987) Consciousness. Cambridge, MA: MIT Press.
Mandik, Pete. (1998). Fine-grained Supervenience, Cognitive Neuroscience, and the Future of Functionalism.
Marr, D. (1982). Vision: A Computational Approach. San Francisco: Freeman & Co.
Polgar, T. D. (2008). "Functionalism". The Internet Encyclopedia of Philosophy.
Putnam, Hilary. (1960). "Minds and Machines". Reprinted in Putnam (1975a).
Putnam, Hilary. (1967). "Psychological Predicates". In Art, Mind, and Religion, W.H. Capitan and D.D. Merrill (eds.), pp. 37–48. (Later published as "The Nature of Mental States" in Putnam (1975a).)
Putnam, Hilary. (1975a). Mind, Language, and Reality. Cambridge: CUP.
Searle, John (1980). "Minds, Brains and Programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–424. doi:10.1017/s0140525x00005756. S2CID 55303721.
Smart, J.J.C. (1959). "Sensations and Brain Processes". Philosophical Review LXVIII.
== External links ==
Levin, Janet. "Functionalism". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
Eliasmith, Chris (2004). "Dictionary of the Philosophy of Mind - Functionalism". University of Waterloo (Canada). Archived from the original on 2012-06-23.
A theory of art is intended to contrast with a definition of art. Traditionally, definitions are composed of necessary and sufficient conditions, and a single counterexample overthrows such a definition. Theorizing about art, on the other hand, is analogous to a theory of a natural phenomenon like gravity. In fact, the intent behind a theory of art is to treat art as a natural phenomenon that should be investigated like any other. The question of whether one can speak of a theory of art without employing a concept of art is also discussed below.
The motivation behind seeking a theory, rather than a definition, is that our best minds have not been able to find definitions without counterexamples. The term "definition" assumes there are concepts, in something like the Platonic sense, and that a definition is an attempt to reach in and pluck out the essence of the concept; it also assumes that at least some people have intellectual access to these concepts. In contrast, a 'conception' is an individual attempt to grasp at the putative essence behind this common term while nobody has "access" to the concept.
A theory of art presumes that each of us employs different conceptions of this unattainable art concept and as a result we must resort to worldly human investigation.
== Aesthetic response ==
Theories of aesthetic response or functional theories of art are in many ways the most intuitive theories of art. At its base, the term "aesthetic" refers to a type of phenomenal experience, and aesthetic definitions identify artworks with artifacts intended to produce aesthetic experiences. Nature can be beautiful and it can produce aesthetic experiences, but nature does not possess the intentional function of producing those experiences. For such a function, an intention is necessary, and thus agency – the artist.
Monroe Beardsley is commonly associated with aesthetic definitions of art. In Beardsley's words, something is art just in case it is "either an arrangement of conditions intended to be capable of affording an experience with marked aesthetic character or (incidentally) an arrangement belonging to a class or type of arrangements that is typically intended to have this capacity" (The aesthetic point of view: selected essays, 1982, 299). Painters arrange "conditions" in the paint/canvas medium, and dancers arrange the "conditions" of their bodily medium, for example. According to Beardsley's first disjunct, art has an intended aesthetic function, but not all artworks succeed in producing aesthetic experiences. The second disjunct allows for artworks that were intended to have this capacity, but failed at it (bad art).
Marcel Duchamp's Fountain is the paradigmatic counterexample to aesthetic definitions of art. Such works are said to be counterexamples because they are artworks that do not possess an intended aesthetic function. Beardsley replies that either such works are not art or they are "comments on art" (1983): "To classify them [Fountain and the like] as artworks just because they make comments on art would be to classify a lot of dull and sometimes unintelligible magazine articles and newspaper reviews as artworks" (p. 25). This response has been widely considered inadequate. It is either question-begging or it relies on an arbitrary distinction between artworks and commentaries on artworks. A great many art theorists today consider aesthetic definitions of art to be extensionally inadequate, primarily because of artworks in the style of Duchamp.
== Formalist ==
The formalist theory of art asserts that we should focus only on the formal properties of art—the "form", not the "content". Those formal properties might include, for the visual arts, color, shape, and line, and, for the musical arts, rhythm and harmony. Formalists do not deny that works of art might have content, representation, or narrative—rather, they deny that those things are relevant in our appreciation or understanding of art.
== Institutional ==
The institutional theory of art is a theory about the nature of art that holds that an object can only become art in the context of the institution known as "the art world".
Addressing the issue of what makes, for example, Marcel Duchamp's "readymades" art, or why a pile of Brillo cartons in a supermarket is not art, whereas Andy Warhol's famous Brillo Boxes (a pile of Brillo carton replicas) is, the art critic and philosopher Arthur Danto wrote in his 1964 essay "The Artworld":
To see something as art requires something the eye cannot descry—an atmosphere of artistic theory, a knowledge of the history of art: an artworld.
According to Robert J. Yanal, Danto's essay, in which he coined the term artworld, outlined the first institutional theory of art.
Versions of the institutional theory were formulated more explicitly by George Dickie in his article "Defining Art" (American Philosophical Quarterly, 1969) and his books Aesthetics: An Introduction (1971) and Art and the Aesthetic: An Institutional Analysis (1974). An early version of Dickie's institutional theory can be summed up in the following definition of work of art from Aesthetics: An Introduction:
A work of art in the classificatory sense is 1) an artifact 2) on which some person or persons acting on behalf of a certain social institution (the artworld) has conferred the status of candidate for appreciation.
Dickie has reformulated his theory in several books and articles. Other philosophers of art have criticized his definitions as being circular.
== Historical ==
Historical theories of art hold that for something to be art, it must bear some relation to existing works of art. For new works to be art, they must be similar or relate to previously established artworks. Such a definition raises the question of where this inherited status originated. That is why historical definitions of art must also include a disjunct for first art: Something is art if it possesses a historical relation to previous artworks, or is first art.
The philosopher primarily associated with the historical definition of art is Jerrold Levinson (1979). For Levinson, "a work of art is a thing intended for regard-as-a-work-of-art: regard in any of the ways works of art existing prior to it have been correctly regarded" (1979, p. 234). Levinson further clarifies that by "intends for" he means: "[m]akes, appropriates or conceives for the purpose of" (1979, p. 236). Some of these manners of regard (at around the present time) are: to be regarded with full attention, to be regarded contemplatively, to be regarded with special notice to appearance, to be regarded with "emotional openness" (1979, p. 237). If an object is not intended for regard in any of the established ways, then it is not art.
== Anti-essentialist ==
Some art theorists have proposed that the attempt to define art must be abandoned and have instead urged an anti-essentialist theory of art. In 'The Role of Theory in Aesthetics' (1956), Morris Weitz famously argues that individually necessary and jointly sufficient conditions will never be forthcoming for the concept 'art' because it is an "open concept". Weitz describes open concepts as those whose "conditions of application are emendable and corrigible" (1956, p. 31). In the case of borderline cases of art and prima facie counterexamples, open concepts "call for some sort of decision on our part to extend the use of the concept to cover this, or to close the concept and invent a new one to deal with the new case and its new property" (p. 31 ital. in original). The question of whether a new artifact is art "is not factual, but rather a decision problem, where the verdict turns on whether or not we enlarge our set of conditions for applying the concept" (p. 32). For Weitz, it is "the very expansive, adventurous character of art, its ever-present changes and novel creations", that makes the concept impossible to capture in a classical definition (as some static univocal essence).
While anti-essentialism was never formally defeated, it was challenged, and the debate over anti-essentialist theories was subsequently swept away by seemingly better essentialist definitions. Commenting after Weitz, Berys Gaut revived anti-essentialism in the philosophy of art with his paper '"Art" as a Cluster Concept' (2000). Cluster concepts are composed of criteria that contribute to art status but are not individually necessary for art status. There is one exception: Artworks are created by agents, and so being an artifact is a necessary property for being an artwork. Gaut (2005) offers a set of ten criteria that contribute to art status:
(i) possessing positive aesthetic qualities (I employ the notion of positive aesthetic qualities here in a narrow sense, comprising beauty and its subspecies);
(ii) being expressive of emotion;
(iii) being intellectually challenging;
(iv) being formally complex and coherent;
(v) having a capacity to convey complex meanings;
(vi) exhibiting an individual point of view;
(vii) being an exercise of creative imagination;
(viii) being an artifact or performance that is the product of a high degree of skill;
(ix) belonging to an established artistic form; and
(x) being the product of an intention to make a work of art. (274)
Satisfying all ten criteria would be sufficient for art status, as would any subset of nine criteria (this is a consequence of the fact that none of the ten properties is necessary). For example, consider two of Gaut's criteria: "possessing aesthetic merit" and "being expressive of emotion" (2000, p. 28). Neither of these criteria is necessary for art status, but both are parts of subsets of these ten criteria that are sufficient for art status. Gaut's definition also allows for many subsets with fewer than nine criteria to be sufficient for art status, which leads to a highly pluralistic theory of art.
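The two sufficiency rules stated above (all ten criteria suffice, as does any nine-membered subset) can be sketched as a toy decision procedure. This is an illustrative model only: Gaut does not give such a procedure, and he leaves open which smaller subsets also suffice, so the sketch encodes just the two rules from the text.

```python
# Toy model of cluster-concept evaluation (illustrative only, not
# Gaut's own formulation). Encodes the two rules stated in the text:
# all ten criteria are sufficient for art status, and so is any nine.

CRITERIA = {
    "positive aesthetic qualities", "expressive of emotion",
    "intellectually challenging", "formally complex and coherent",
    "conveys complex meanings", "individual point of view",
    "creative imagination", "product of skill",
    "established artistic form", "intention to make art",
}

def sufficient_for_art(satisfied):
    """Return True if the satisfied criteria guarantee art status
    under the two rules stated in the text (ten or any nine)."""
    satisfied = set(satisfied) & CRITERIA
    return len(satisfied) >= 9

# No single criterion is necessary: dropping any one of the ten
# still leaves a sufficient nine-membered subset.
without_aesthetics = CRITERIA - {"positive aesthetic qualities"}
# sufficient_for_art(without_aesthetics) is True, even though the
# work lacks positive aesthetic qualities.
```

The design point the sketch makes concrete is the contrast with classical definitions: there is no criterion whose removal ever flips the verdict on its own, which is exactly what "no individually necessary conditions" amounts to.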
In 2021, the philosopher Jason Josephson Storm defended anti-essentialist definitions of art as part of a broader analysis of the role of macro-categories in the human sciences. Specifically, he argued that most essentialist attempts to answer Weitz's original argument fail because the criteria they propose to define art are not themselves present or identical across cultures. Storm went further and argued that Weitz's appeal to family resemblance to define art without essentialism is ultimately circular, because it does not explain why similarities between "art" across cultures are relevant to defining it even anti-essentially. Instead, Storm applied a theory of social kinds to the category "art" that emphasized how different forms of art fulfill different "cultural niches".
The theory of art has also been affected by a philosophical turn in thinking, exemplified not only by the aesthetics of Kant but tied more closely to ontology and metaphysics in Heidegger's reflections on the essence of modern technology and its implications for all beings, which it reduces to what he calls 'standing reserve'. It is from this perspective on the question of being that Heidegger explored art beyond the history, theory, and criticism of artistic production, most influentially in The Origin of the Work of Art. This has also had an impact on architectural thinking in its philosophical roots.
== Aesthetic creation ==
Nick Zangwill describes the aesthetic-creation theory of art as a theory of "how art comes to be produced" (p. 167) and an "artist-based" theory. Zangwill distinguishes three phases in the production of a work of art:
[F]irst, there is the insight that by creating certain nonaesthetic properties, certain aesthetic properties will be realized; second, there is the intention to realize the aesthetic properties in the nonaesthetic properties, as envisaged in the insight; and, third, there is the more or less successful action of realizing the aesthetic properties in the nonaesthetic properties, as envisaged in the insight and intention. (45)
In the creation of an artwork, the insight plays a causal role in bringing about actions sufficient for realizing particular aesthetic properties. Zangwill does not describe this relation in detail, but only says it is "because of" this insight that the aesthetic properties are created.
Aesthetic properties are instantiated by nonaesthetic properties that "include physical properties, such as shape and size, and secondary qualities, such as colours or sounds." (37) Zangwill says that aesthetic properties supervene on the nonaesthetic properties: it is because of the particular nonaesthetic properties it has that the work possesses certain aesthetic properties (and not the other way around).
== What is "art"? ==
Since art often serves functional purposes and sometimes has no function other than to convey or communicate an idea, how best to define the term "art" is a subject of constant contention; many books and journal articles have been published arguing over even the basics of what we mean by the term "art". Theodor Adorno claimed in his Aesthetic Theory (1969), "It is self-evident that nothing concerning art is self-evident." Artists, philosophers, anthropologists, psychologists, and programmers all use the notion of art in their respective fields and give it operational definitions that vary considerably. Furthermore, it is clear that even the basic meaning of the term "art" has changed several times over the centuries, and continued to evolve during the 20th century as well.
The main recent sense of the word "art" is roughly as an abbreviation for "fine art". Here we mean that skill is being used to express the artist's creativity, engage the audience's aesthetic sensibilities, or draw the audience toward consideration of the "finer" things. Often, if the skill is being used in a functional object, people will consider it a craft instead of art, a suggestion that is highly disputed by many contemporary craft thinkers. Likewise, if the skill is being used in a commercial or industrial way, it may be considered design instead of art, or contrariwise, these may be defended as art forms, perhaps called applied art. Some thinkers, for instance, have argued that the difference between fine art and applied art has more to do with the actual function of the object than any clear definitional difference.
Even as late as 1912, it was normal in the West to assume that all art aims at beauty, and thus that anything that was not trying to be beautiful could not count as art. The cubists, dadaists, Stravinsky, and many later art movements struggled against this conception that beauty was central to the definition of art, with such success that, according to Danto, "Beauty had disappeared not only from the advanced art of the 1960s but from the advanced philosophy of art of that decade as well." Perhaps some notion like "expression" (in Croce's theories) or "counter-environment" (in McLuhan's theory) can replace the previous role of beauty. Brian Massumi brought back "beauty" into consideration together with "expression". Another view, as important to the philosophy of art as "beauty", is that of the "sublime", elaborated upon in the twentieth century by the postmodern philosopher Jean-François Lyotard. A further approach, elaborated by André Malraux in works such as The Voices of Silence, is that art is fundamentally a response to a metaphysical question ("Art", he writes, "is an 'anti-destiny'"). Malraux argues that, while art has sometimes been oriented toward beauty and the sublime (principally in post-Renaissance European art), these qualities, as the wider history of art demonstrates, are by no means essential to it.
Perhaps (as in Kennick's theory) no definition of art is possible anymore. Perhaps art should be thought of as a cluster of related concepts in a Wittgensteinian fashion (as in Weitz or Beuys). Another approach is to say that "art" is basically a sociological category: whatever art schools, museums, and artists define as art is considered art, regardless of formal definitions. This "institutional definition of art" (see also Institutional Critique) has been championed by George Dickie. Most people did not consider the depiction of a store-bought urinal or a Brillo Box to be art until Marcel Duchamp and Andy Warhol (respectively) placed them in the context of art (i.e., the art gallery), which then provided these objects with the associations that define art.
Proceduralists often suggest that it is the process by which a work of art is created or viewed that makes it art, not any inherent feature of an object, nor how well it is received by the institutions of the art world after its introduction to society at large. If a poet writes down several lines, intending them as a poem, the very procedure by which they are written makes them a poem. If a journalist writes exactly the same set of words, intending them as shorthand notes to help him write a longer article later, however, these would not be a poem. Leo Tolstoy, on the other hand, claims in his What Is Art? (1897) that what decides whether something is art is how it is experienced by its audience, not the intention of its creator. Functionalists like Monroe Beardsley argue that whether a piece counts as art depends on what function it plays in a particular context; the same Greek vase may play a nonartistic function in one context (carrying wine) and an artistic function in another (helping us appreciate the beauty of the human figure).
Marxist attempts to define art focus on its place in the mode of production, such as in Walter Benjamin's essay The Author as Producer, and/or its political role in class struggle. Revising some concepts of the Marxist philosopher Louis Althusser, Gary Tedman defines art in terms of social reproduction of the relations of production on the aesthetic level.
== What should art be like? ==
Many goals have been argued for art, and aestheticians often argue that some goal or another is superior in some way. Clement Greenberg, for instance, argued in 1960 that each artistic medium should seek that which makes it unique among the possible mediums and then purify itself of anything other than expression of its own uniqueness as a form. The Dadaist Tristan Tzara on the other hand saw the function of art in 1918 as the destruction of a mad social order. "We must sweep and clean. Affirm the cleanliness of the individual after the state of madness, aggressive complete madness of a world abandoned to the hands of bandits." Formal goals, creative goals, self-expression, political goals, spiritual goals, philosophical goals, and even more perceptual or aesthetic goals have all been popular pictures of what art should be like.
== The value of art ==
Tolstoy defined art as the following: "Art is a human activity consisting in this, that one man consciously, by means of certain external signs, hands on to others feelings he has lived through, and that other people are infected by these feelings and also experience them." However, this definition is merely a starting point for his theory of art's value. To some extent, the value of art, for Tolstoy, is one with the value of empathy. However, sometimes empathy is not of value. In chapter fifteen of What Is Art?, Tolstoy says that some feelings are good, but others are bad, and so art is only valuable when it generates empathy or shared feeling for good feelings. For example, Tolstoy asserts that empathy for decadent members of the ruling class makes society worse, rather than better. In chapter sixteen, he asserts that the best art is "universal art" that expresses simple and accessible positive feeling.
An argument for the value of art, used in the fictional work The Hitchhiker's Guide to the Galaxy, runs as follows: if some external force threatening the imminent destruction of Earth asked humanity what its value was, what should humanity's response be? The argument continues that the only justification humanity could give for its continued existence would be the past and continued creation of things like a Shakespeare play, a Rembrandt painting or a Bach concerto. The suggestion is that these are the things of value that define humanity. Whatever one might think of this claim, and it does seem to undervalue the many other achievements of which human beings have shown themselves capable, both individually and collectively, it is true that art appears to possess a special capacity to endure ("live on") beyond the moment of its birth, in many cases for centuries or millennia. This capacity of art to endure over time, what precisely it is and how it operates, has been widely neglected in modern aesthetics.
== Set theory of art ==
A set theory of art starts from the notion that everything is art. Sets both broader and narrower than this state are then proposed for reference, suggesting that art theory springs up to guard against complacency.
Everything is art.
A set of this would be an eternal set large enough to incorporate everything, with Ben Vautier's 'Universe' given as an example work of art.
Everything and then some more is art (Everything+)
A set of this would be an eternal set with a small circle incorporated in it, with Aronsson's 'Universe Orange' (a star map of the universe accompanying a life-sized physical orange) given as an example work of art.
Everything that can be created (without practical use) is art (Everything-)
A set of this would be a shadow set (universe), akin to a negative universe.
Everything that can be experienced is art (Everything--)
A set of this would be a finite set interacting lawfully with other sets without losing its position as the premier set (the whole), with a picture of the Orion Nebula (unknown artist) given as an example work of art.
Everything that exists, has existed, and will ever exist is art (Everything++)
A set of this would be an infinite set consisting of every parallel universe, with Marvel's 'Omniverse' given as an example work of art.
== See also ==
Aesthetics, the philosophy of art
Poetics, the theory of poetry
== References == | Wikipedia/Institutional_theory_of_art |
An immediate inference is an inference which can be made from only one statement or proposition. For instance, from the statement "All toads are green", the immediate inference can be made that "no toads are not green" or "no toads are non-green" (Obverse). There are a number of immediate inferences which can validly be made using logical operations. There are also invalid immediate inferences which are syllogistic fallacies.
== Valid immediate inferences ==
=== Converse ===
Given a type E statement, "No S are P.", one can make the immediate inference that "No P are S" which is the converse of the given statement.
Given a type I statement, "Some S are P.", one can make the immediate inference that "Some P are S" which is the converse of the given statement.
=== Obverse ===
Given a type A statement, "All S are P.", one can make the immediate inference that "No S are non-P" which is the obverse of the given statement.
Given a type E statement, "No S are P.", one can make the immediate inference that "All S are non-P" which is the obverse of the given statement.
Given a type I statement, "Some S are P.", one can make the immediate inference that "Some S are not non-P" which is the obverse of the given statement.
Given a type O statement, "Some S are not P.", one can make the immediate inference that "Some S are non-P" which is the obverse of the given statement.
=== Contrapositive ===
Given a type A statement, "All S are P.", one can make the immediate inference that "All non-P are non-S" which is the contrapositive of the given statement.
Given a type O statement, "Some S are not P.", one can make the immediate inference that "Some non-P are not non-S" which is the contrapositive of the given statement.
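The valid operations above can be sketched programmatically. The following Python sketch (the triple representation and function names are illustrative, not from any standard library) encodes categorical statements as (form, subject, predicate) triples, where the forms are 'A' ("All S are P"), 'E' ("No S are P"), 'I' ("Some S are P"), and 'O' ("Some S are not P"), and applies conversion, obversion, and contraposition only where they are valid:

```python
def neg(term):
    """Complement of a term: 'green' <-> 'non-green'."""
    return term[4:] if term.startswith('non-') else 'non-' + term

def converse(stmt):
    form, s, p = stmt
    if form in ('E', 'I'):          # conversion is valid only for E and I
        return (form, p, s)
    return None                     # no valid converse for A and O

def obverse(stmt):
    form, s, p = stmt
    # obversion: change the quality and negate the predicate (valid for all forms)
    flip = {'A': 'E', 'E': 'A', 'I': 'O', 'O': 'I'}
    return (flip[form], s, neg(p))

def contrapositive(stmt):
    form, s, p = stmt
    if form in ('A', 'O'):          # contraposition is valid only for A and O
        return (form, neg(p), neg(s))
    return None

# "All toads are green" obverts to "No toads are non-green"
print(obverse(('A', 'toads', 'green')))   # ('E', 'toads', 'non-green')
print(converse(('E', 'S', 'P')))          # ('E', 'P', 'S')
print(contrapositive(('A', 'S', 'P')))    # ('A', 'non-P', 'non-S')
```

Note that obversion is its own inverse: applying it twice returns the original statement.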
== Invalid immediate inferences ==
Cases of the incorrect application of the contrary, subcontrary and subalternation relations (these hold in the traditional square of opposition, not the modern square of opposition) are syllogistic fallacies called illicit contrary, illicit subcontrary, and illicit subalternation, respectively. Cases of incorrect application of the contradictory relation (which holds in both the traditional and modern squares of opposition) are so infrequent that an "illicit contradictory" fallacy is usually not recognized. Examples of these cases are shown below.
=== Illicit contrary ===
It is false that all A are B, therefore no A are B.
It is false that no A are B, therefore all A are B.
=== Illicit subcontrary ===
Some A are B, therefore it is false that some A are not B.
Some A are not B, therefore some A are B.
=== Illicit subalternation and illicit superalternation ===
Some A are not B, therefore no A are B.
It is false that all A are B, therefore it is false that some A are B.
== See also ==
Transposition (logic)
Inverse (logic)
== References == | Wikipedia/Immediate_inference |
Inference is a live album by pianist Marilyn Crispell and saxophonist Tim Berne which was recorded during the Toronto Jazz Festival in 1992 and released on the Music & Arts label.
== Reception ==
The AllMusic review by Scott Yanow said: "This set has plenty of stimulating and unpredictable interplay by the two giants (both of whom sound as if they have large ears). Music & Arts deserves thanks from the jazz world for making these noncommercial sounds available."
The Penguin Guide to Jazz notes that "This is not a great astonishment, nor is it as good as it might be. Berne's curious bluster often overpowers what otherwise seems impressively vivid and reciprocal music-making."
== Track listing ==
All compositions by Marilyn Crispell except as indicated
"For Alto and Piano II" - 9:43
"Ho' Time" (Tim Berne) - 10:39
"Inference" (Berne) - 10:51
"Sorrow" - 9:31
"Bass Voodoo" (Berne) - 8:22
"Only Paradise" - 5:03
== Personnel ==
Tim Berne - alto saxophone
Marilyn Crispell - piano
== References == | Wikipedia/Inference_(album) |
In the general theory of relativity, the Einstein field equations (EFE; also known as Einstein's equations) relate the geometry of spacetime to the distribution of matter within it.
The equations were published by Albert Einstein in 1915 in the form of a tensor equation which related the local spacetime curvature (expressed by the Einstein tensor) with the local energy, momentum and stress within that spacetime (expressed by the stress–energy tensor).
Analogously to the way that electromagnetic fields are related to the distribution of charges and currents via Maxwell's equations, the EFE relate the spacetime geometry to the distribution of mass–energy, momentum and stress, that is, they determine the metric tensor of spacetime for a given arrangement of stress–energy–momentum in the spacetime. The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of nonlinear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation.
As well as implying local energy–momentum conservation, the EFE reduce to Newton's law of gravitation in the limit of a weak gravitational field and velocities that are much less than the speed of light.
Exact solutions for the EFE can only be found under simplifying assumptions such as symmetry. Special classes of exact solutions are most often studied since they model many gravitational phenomena, such as rotating black holes and the expanding universe. Further simplification is achieved in approximating the spacetime as having only small deviations from flat spacetime, leading to the linearized EFE. These equations are used to study phenomena such as gravitational waves.
== Mathematical form ==
The Einstein field equations (EFE) may be written in the form:
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu },}
where Gμν is the Einstein tensor, gμν is the metric tensor, Tμν is the stress–energy tensor, Λ is the cosmological constant and κ is the Einstein gravitational constant.
The Einstein tensor is defined as
{\displaystyle G_{\mu \nu }=R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu },}
where Rμν is the Ricci curvature tensor, and R is the scalar curvature. This is a symmetric second-degree tensor that depends on only the metric tensor and its first and second derivatives.
The Einstein gravitational constant is defined as
{\displaystyle \kappa ={\frac {8\pi G}{c^{4}}}\approx 2.07665\times 10^{-43}\,{\textrm {N}}^{-1},}
where G is the Newtonian constant of gravitation and c is the speed of light in vacuum.
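As a quick numerical check, κ = 8πG/c⁴ can be evaluated directly; the values of G and c below are standard CODATA-style constants assumed here rather than taken from the article:

```python
import math

G = 6.67430e-11     # Newtonian constant of gravitation, m^3 kg^-1 s^-2 (assumed value)
c = 2.99792458e8    # speed of light in vacuum, m/s

kappa = 8 * math.pi * G / c**4   # Einstein gravitational constant
print(kappa)                     # ≈ 2.0766e-43 N^-1
```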
The EFE can thus also be written as
{\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }.}
In standard units, each term on the left has quantity dimension of L−2.
The expression on the left represents the curvature of spacetime as determined by the metric; the expression on the right represents the stress–energy–momentum content of spacetime. The EFE can then be interpreted as a set of equations dictating how stress–energy–momentum determines the curvature of spacetime.
These equations, together with the geodesic equation, which dictates how freely falling matter moves through spacetime, form the core of the mathematical formulation of general relativity.
The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
Although the Einstein field equations were initially formulated in the context of a four-dimensional theory, some theorists have explored their consequences in n dimensions. The equations in contexts outside of general relativity are still referred to as the Einstein field equations. The vacuum field equations (obtained when Tμν is everywhere zero) define Einstein manifolds.
The equations are more complex than they appear. Given a specified distribution of matter and energy in the form of a stress–energy tensor, the EFE are understood to be equations for the metric tensor gμν, since both the Ricci tensor and scalar curvature depend on the metric in a complicated nonlinear manner. When fully written out, the EFE are a system of ten coupled, nonlinear, hyperbolic-elliptic partial differential equations.
=== Sign convention ===
The above form of the EFE is the standard established by Misner, Thorne, and Wheeler (MTW). The authors analyzed conventions that exist and classified these according to three signs ([S1] [S2] [S3]):
{\displaystyle {\begin{aligned}g_{\mu \nu }&=[S1]\times \operatorname {diag} (-1,+1,+1,+1)\\[6pt]{R^{\mu }}_{\alpha \beta \gamma }&=[S2]\times \left(\Gamma _{\alpha \gamma ,\beta }^{\mu }-\Gamma _{\alpha \beta ,\gamma }^{\mu }+\Gamma _{\sigma \beta }^{\mu }\Gamma _{\gamma \alpha }^{\sigma }-\Gamma _{\sigma \gamma }^{\mu }\Gamma _{\beta \alpha }^{\sigma }\right)\\[6pt]G_{\mu \nu }&=[S3]\times \kappa T_{\mu \nu }\end{aligned}}}
The third sign above is related to the choice of convention for the Ricci tensor:
{\displaystyle R_{\mu \nu }=[S2]\times [S3]\times {R^{\alpha }}_{\mu \alpha \nu }}
With these definitions Misner, Thorne, and Wheeler classify themselves as (+ + +), whereas Weinberg (1972) is (+ − −), Peebles (1980) and Efstathiou et al. (1990) are (− + +), Rindler (1977), Atwater (1974), Collins Martin & Squires (1989) and Peacock (1999) are (− + −).
Authors including Einstein have used a different sign in their definition for the Ricci tensor which results in the sign of the constant on the right side being negative:
{\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }-\Lambda g_{\mu \nu }=-\kappa T_{\mu \nu }.}
The sign of the cosmological term would change in both these versions if the (+ − − −) metric sign convention is used rather than the MTW (− + + +) metric sign convention adopted here.
=== Equivalent formulations ===
Taking the trace with respect to the metric of both sides of the EFE one gets
{\displaystyle R-{\frac {D}{2}}R+D\Lambda =\kappa T,}
where D is the spacetime dimension. Solving for R and substituting this in the original EFE, one gets the following equivalent "trace-reversed" form:
{\displaystyle R_{\mu \nu }-{\frac {2}{D-2}}\Lambda g_{\mu \nu }=\kappa \left(T_{\mu \nu }-{\frac {1}{D-2}}Tg_{\mu \nu }\right).}
In D = 4 dimensions this reduces to
{\displaystyle R_{\mu \nu }-\Lambda g_{\mu \nu }=\kappa \left(T_{\mu \nu }-{\frac {1}{2}}T\,g_{\mu \nu }\right).}
Reversing the trace again would restore the original EFE. The trace-reversed form may be more convenient in some cases (for example, when one is interested in weak-field limit and can replace gμν in the expression on the right with the Minkowski metric without significant loss of accuracy).
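The trace-reversal algebra can be checked numerically. The sketch below (not from the article; κ and Λ are set to illustrative O(1) values and the metric is taken to be Minkowski) builds Rμν from the original EFE for a random symmetric Tμν and confirms the trace-reversed identity in D = 4:

```python
import numpy as np

D = 4
g = np.diag([-1.0, 1.0, 1.0, 1.0])        # (- + + +) Minkowski metric
g_inv = np.linalg.inv(g)
kappa, Lam = 1.0, 0.5                      # illustrative values, not physical units

rng = np.random.default_rng(0)
T = rng.normal(size=(D, D))
T = (T + T.T) / 2                          # random symmetric stress-energy T_mn
T_tr = np.einsum('mn,mn->', g_inv, T)      # trace T = g^mn T_mn

# Trace of the EFE: R - (D/2) R + D Λ = κ T  =>  R = 2 (κ T - D Λ) / (2 - D)
R_scal = 2.0 * (kappa * T_tr - D * Lam) / (2 - D)

# Solve the original EFE for R_mn, then test the trace-reversed identity
R = kappa * T + 0.5 * R_scal * g - Lam * g
lhs = R - (2.0 / (D - 2)) * Lam * g
rhs = kappa * (T - T_tr * g / (D - 2))
assert np.allclose(lhs, rhs)
print("trace-reversed form verified")
```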
== Cosmological constant ==
In the Einstein field equations
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }\,,}
the term containing the cosmological constant Λ was absent from the version in which Einstein originally published them. He then included the term with the cosmological constant to allow for a universe that is not expanding or contracting. This effort was unsuccessful because:
any desired steady state solution described by this equation is unstable, and
observations by Edwin Hubble showed that our universe is expanding.
Einstein then abandoned Λ, remarking to George Gamow "that the introduction of the cosmological term was the biggest blunder of his life".
The inclusion of this term does not create inconsistencies. For many years the cosmological constant was almost universally assumed to be zero. More recent astronomical observations have shown an accelerating expansion of the universe, and to explain this a positive value of Λ is needed. The effect of the cosmological constant is negligible at the scale of a galaxy or smaller.
Einstein thought of the cosmological constant as an independent parameter, but its term in the field equation can also be moved algebraically to the other side and incorporated as part of the stress–energy tensor:
{\displaystyle T_{\mu \nu }^{\mathrm {(vac)} }=-{\frac {\Lambda }{\kappa }}g_{\mu \nu }\,.}
This tensor describes a vacuum state with an energy density ρvac and isotropic pressure pvac that are fixed constants and given by
{\displaystyle \rho _{\mathrm {vac} }=-p_{\mathrm {vac} }={\frac {\Lambda }{\kappa }},}
where it is assumed that Λ has SI unit m−2 and κ is defined as above.
The existence of a cosmological constant is thus equivalent to the existence of a vacuum energy and a pressure of opposite sign. This has led to the terms "cosmological constant" and "vacuum energy" being used interchangeably in general relativity.
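For a rough sense of scale, the vacuum energy density ρvac = Λ/κ can be evaluated; the value of Λ used below (about 1.1 × 10⁻⁵² m⁻², roughly the observed value) is an assumption for illustration, not a figure from the article:

```python
import math

G = 6.67430e-11                  # m^3 kg^-1 s^-2 (assumed value)
c = 2.99792458e8                 # m/s
kappa = 8 * math.pi * G / c**4   # Einstein gravitational constant, N^-1
Lam = 1.1e-52                    # cosmological constant, m^-2 (assumed value)

rho_vac = Lam / kappa            # vacuum energy density, J/m^3
print(rho_vac)                   # of order 5e-10 J/m^3
```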
== Features ==
=== Conservation of energy and momentum ===
General relativity is consistent with the local conservation of energy and momentum expressed as
{\displaystyle \nabla _{\beta }T^{\alpha \beta }={T^{\alpha \beta }}_{;\beta }=0.}
This conservation law is a physical requirement. With his field equations Einstein ensured that general relativity is consistent with this conservation condition.
=== Nonlinearity ===
The nonlinearity of the EFE distinguishes general relativity from many other fundamental physical theories. For example, Maxwell's equations of electromagnetism are linear in the electric and magnetic fields, and charge and current distributions (i.e. the sum of two solutions is also a solution); another example is the Schrödinger equation of quantum mechanics, which is linear in the wavefunction.
=== Correspondence principle ===
The EFE reduce to Newton's law of gravity by using both the weak-field approximation and the low-velocity approximation. The constant G appearing in the EFE is determined by making these two approximations.
== Vacuum field equations ==
If the energy–momentum tensor Tμν is zero in the region under consideration, then the field equations are also referred to as the vacuum field equations. By setting Tμν = 0 in the trace-reversed field equations, the vacuum field equations, also known as 'Einstein vacuum equations' (EVE), can be written as
{\displaystyle R_{\mu \nu }=0\,.}
In the case of nonzero cosmological constant, the equations are
{\displaystyle R_{\mu \nu }={\frac {\Lambda }{{\frac {D}{2}}-1}}g_{\mu \nu }\,.}
The solutions to the vacuum field equations are called vacuum solutions. Flat Minkowski space is the simplest example of a vacuum solution. Nontrivial examples include the Schwarzschild solution and the Kerr solution.
Manifolds with a vanishing Ricci tensor, Rμν = 0, are referred to as Ricci-flat manifolds and manifolds with a Ricci tensor proportional to the metric as Einstein manifolds.
== Einstein–Maxwell equations ==
If the energy–momentum tensor Tμν is that of an electromagnetic field in free space, i.e. if the electromagnetic stress–energy tensor
{\displaystyle T^{\alpha \beta }=\,-{\frac {1}{\mu _{0}}}\left({F^{\alpha }}^{\psi }{F_{\psi }}^{\beta }+{\tfrac {1}{4}}g^{\alpha \beta }F_{\psi \tau }F^{\psi \tau }\right)}
is used, then the Einstein field equations are called the Einstein–Maxwell equations (with cosmological constant Λ, taken to be zero in conventional relativity theory):
{\displaystyle G^{\alpha \beta }+\Lambda g^{\alpha \beta }={\frac {\kappa }{\mu _{0}}}\left({F^{\alpha }}^{\psi }{F_{\psi }}^{\beta }+{\tfrac {1}{4}}g^{\alpha \beta }F_{\psi \tau }F^{\psi \tau }\right).}
Additionally, the covariant Maxwell equations are also applicable in free space:
{\displaystyle {\begin{aligned}{F^{\alpha \beta }}_{;\beta }&=0\\F_{[\alpha \beta ;\gamma ]}&={\tfrac {1}{3}}\left(F_{\alpha \beta ;\gamma }+F_{\beta \gamma ;\alpha }+F_{\gamma \alpha ;\beta }\right)={\tfrac {1}{3}}\left(F_{\alpha \beta ,\gamma }+F_{\beta \gamma ,\alpha }+F_{\gamma \alpha ,\beta }\right)=0,\end{aligned}}}
where the semicolon represents a covariant derivative, and the brackets denote anti-symmetrization. The first equation asserts that the 4-divergence of the 2-form F is zero, and the second that its exterior derivative is zero. From the latter, it follows by the Poincaré lemma that in a coordinate chart it is possible to introduce an electromagnetic field potential Aα such that
{\displaystyle F_{\alpha \beta }=A_{\alpha ;\beta }-A_{\beta ;\alpha }=A_{\alpha ,\beta }-A_{\beta ,\alpha }}
in which the comma denotes a partial derivative. This is often taken as equivalent to the covariant Maxwell equation from which it is derived. However, there are global solutions of the equation that may lack a globally defined potential.
== Solutions ==
The solutions of the Einstein field equations are metrics of spacetime. These metrics describe the structure of the spacetime including the inertial motion of objects in the spacetime. As the field equations are non-linear, they cannot always be completely solved (i.e. without making approximations). For example, there is no known complete solution for a spacetime with two massive bodies in it (which is a theoretical model of a binary star system, for example). However, approximations are usually made in these cases. These are commonly referred to as post-Newtonian approximations. Even so, there are several cases where the field equations have been solved completely, and those are called exact solutions.
The study of exact solutions of Einstein's field equations is one of the activities of cosmology. It leads to the prediction of black holes and to different models of evolution of the universe.
One can also discover new solutions of the Einstein field equations via the method of orthonormal frames as pioneered by Ellis and MacCallum. In this approach, the Einstein field equations are reduced to a set of coupled, nonlinear, ordinary differential equations. As discussed by Hsu and Wainwright, self-similar solutions to the Einstein field equations are fixed points of the resulting dynamical system. New solutions have been discovered using these methods by LeBlanc and Kohli and Haslam.
== Linearized EFE ==
The nonlinearity of the EFE makes finding exact solutions difficult. One way of solving the field equations is to make an approximation, namely, that far from the source(s) of gravitating matter, the gravitational field is very weak and the spacetime approximates that of Minkowski space. The metric is then written as the sum of the Minkowski metric and a term representing the deviation of the true metric from the Minkowski metric, ignoring higher-power terms. This linearization procedure can be used to investigate the phenomena of gravitational radiation.
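The weak-field ansatz can be illustrated numerically: write gμν = ημν + hμν with |hμν| ≪ 1 and check that the first-order inverse ημν − hμν (indices raised with η) inverts the metric up to O(h²). The perturbation below is random and purely illustrative:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, (- + + +)
rng = np.random.default_rng(2)
h = 1e-6 * rng.normal(size=(4, 4))
h = (h + h.T) / 2                          # small symmetric perturbation h_mn
g = eta + h                                # true metric = Minkowski + deviation

# To first order, the inverse metric is eta^mn - h^mn (indices raised with eta);
# the error is O(h^2) ~ 1e-12 here
g_inv_lin = eta - eta @ h @ eta
assert np.allclose(g_inv_lin @ g, np.eye(4), atol=1e-10)
print("first-order inverse accurate to O(h^2)")
```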
== Polynomial form ==
Although the EFE as written contain the inverse of the metric tensor, they can be arranged in a form that contains the metric tensor in polynomial form and without its inverse. First, the determinant of the metric in 4 dimensions can be written
{\displaystyle \det(g)={\tfrac {1}{24}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\alpha \kappa }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }}
using the Levi-Civita symbol; and the inverse of the metric in 4 dimensions can be written as:
{\displaystyle g^{\alpha \kappa }={\frac {{\tfrac {1}{6}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }}{\det(g)}}\,.}
Substituting this expression of the inverse of the metric into the equations then multiplying both sides by a suitable power of det(g) to eliminate it from the denominator results in polynomial equations in the metric tensor and its first and second derivatives. The Einstein–Hilbert action from which the equations are derived can also be written in polynomial form by suitable redefinitions of the fields.
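The Levi-Civita determinant identity can be verified numerically; the sketch below (illustrative, not from the article) builds the 4-index Levi-Civita symbol and compares the ε-contraction with a standard determinant routine:

```python
import itertools
import numpy as np

def levi_civita_4():
    """4-index Levi-Civita symbol as a numpy array."""
    eps = np.zeros((4, 4, 4, 4))
    for perm in itertools.permutations(range(4)):
        # parity via the number of inversions in the permutation
        inv = sum(1 for i in range(4) for j in range(i + 1, 4) if perm[i] > perm[j])
        eps[perm] = (-1) ** inv
    return eps

rng = np.random.default_rng(1)
g = rng.normal(size=(4, 4))
g = (g + g.T) / 2                      # a random symmetric "metric"
eps = levi_civita_4()

# det(g) = (1/24) eps^{abcd} eps^{klmn} g_ak g_bl g_cm g_dn
det_lc = np.einsum('abcd,klmn,ak,bl,cm,dn->', eps, eps, g, g, g, g) / 24.0
assert np.isclose(det_lc, np.linalg.det(g))
print("Levi-Civita formula matches np.linalg.det")
```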
== See also ==
== Notes ==
== References ==
See General relativity resources.
Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 978-0-7167-0344-0.
Weinberg, Steven (1972). Gravitation and Cosmology. John Wiley & Sons. ISBN 0-471-92567-5.
Peacock, John A. (1999). Cosmological Physics. Cambridge University Press. ISBN 978-0521410724.
== External links ==
"Einstein equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Caltech Tutorial on Relativity — A simple introduction to Einstein's Field Equations.
The Meaning of Einstein's Equation — An explanation of Einstein's field equation, its derivation, and some of its consequences
Video Lecture on Einstein's Field Equations by MIT Physics Professor Edmund Bertschinger.
Arch and scaffold: How Einstein found his field equations, Physics Today, November 2015; a history of the development of the field equations.
=== External images ===
The Einstein field equation on the wall of the Museum Boerhaave in downtown Leiden
Suzanne Imber, "The impact of general relativity on the Atacama Desert", Einstein field equation on the side of a train in Bolivia.
In physics, specifically field theory and particle physics, the Proca action describes a massive spin-1 field of mass m in Minkowski spacetime. The corresponding equation is a relativistic wave equation called the Proca equation. The Proca action and equation are named after Romanian physicist Alexandru Proca.
The Proca equation is involved in the Standard Model and describes there the three massive vector bosons, i.e. the Z and W bosons.
This article uses the (+−−−) metric signature and tensor index notation in the language of 4-vectors.
== Lagrangian density ==
The field involved is a complex 4-potential
{\displaystyle B^{\mu }=\left({\frac {\phi }{c}},\mathbf {A} \right)}
, where
{\displaystyle \phi }
is a kind of generalized electric potential and
{\displaystyle \mathbf {A} }
is a generalized magnetic potential. The field
{\displaystyle B^{\mu }}
transforms like a complex four-vector.
The Lagrangian density is given by:
{\displaystyle {\mathcal {L}}=-{\frac {1}{2}}(\partial _{\mu }B_{\nu }^{*}-\partial _{\nu }B_{\mu }^{*})(\partial ^{\mu }B^{\nu }-\partial ^{\nu }B^{\mu })+{\frac {m^{2}c^{2}}{\hbar ^{2}}}B_{\nu }^{*}B^{\nu }.}
where
{\displaystyle c}
is the speed of light in vacuum,
{\displaystyle \hbar }
is the reduced Planck constant, and
{\displaystyle \partial _{\mu }}
is the 4-gradient.
== Equation ==
The Euler–Lagrange equation of motion for this case, also called the Proca equation, is:
{\displaystyle \partial _{\mu }{\Bigl (}\ \partial ^{\mu }B^{\nu }-\partial ^{\nu }B^{\mu }\ {\Bigr )}+\left({\frac {\ m\ c\ }{\hbar }}\right)^{2}B^{\nu }=0}
Taking the four-divergence of this equation annihilates the antisymmetric kinetic term, so for
{\displaystyle \ m\neq 0\ }
it implies the constraint
{\displaystyle \ \partial _{\nu }B^{\nu }=0\ ,}
which may be called a generalized Lorenz gauge condition. The Proca equation then reduces to the Klein–Gordon-type equation
{\displaystyle \left[\ \partial _{\mu }\partial ^{\mu }+\left({\frac {\ m\ c\ }{\hbar }}\right)^{2}\ \right]B^{\nu }=0}
For non-zero sources, with all fundamental constants included, the field equation is:
{\displaystyle c\ \mu _{0}\ j^{\nu }\;=\;\left[\ g^{\mu \nu }\left(\partial _{\sigma }\partial ^{\sigma }+{\frac {\ m^{2}\ c^{2}\ }{\ \hbar ^{2}}}\right)-\partial ^{\nu }\partial ^{\mu }\ \right]B_{\mu }\ }
When
{\displaystyle \ m=0\ ,}
the source free equations reduce to Maxwell's equations without charge or current, and the above reduces to Maxwell's charge equation. This Proca field equation is closely related to the Klein–Gordon equation, because it is second order in space and time.
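For a plane-wave ansatz B^ν = ε^ν e^(−ik·x) the Proca equation becomes algebraic, and contracting it with k_ν isolates the mass term, which is how the Lorenz-type condition arises for nonzero mass. A small SymPy sketch (an illustration; symbol names such as mu = mc/ħ are ours) verifies this:

```python
import sympy as sp

# Minkowski metric, signature (+---)
eta = sp.diag(1, -1, -1, -1)
k = sp.Matrix(sp.symbols('k0:4'))       # wave 4-vector k_mu (lower index)
e = sp.Matrix(sp.symbols('e0:4'))       # polarization eps^nu (upper index)
mu = sp.symbols('mu', positive=True)    # shorthand for m*c/hbar

kup = eta * k                           # k^mu = eta^{mu nu} k_nu
k2 = (k.T * kup)[0]                     # k^2 = k_mu k^mu
kdote = (k.T * e)[0]                    # k_nu eps^nu

# Proca equation for B^nu = eps^nu exp(-i k.x):
# (mu^2 - k^2) eps^nu + k^nu (k.eps) = 0
proca = (mu**2 - k2) * e + kdote * kup

# contracting with k_nu kills the kinetic part and leaves mu^2 (k.eps),
# so m != 0 forces the Lorenz-type condition k.eps = 0
contracted = sp.expand((k.T * proca)[0])
```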
In the vector calculus notation, the source free equations are:
{\displaystyle \ \Box \ \phi -{\frac {\ \partial }{\partial t}}\left({\frac {1}{\ c^{2}}}{\frac {\ \partial \phi \ }{\partial t}}+\nabla \cdot \mathbf {A} \right)~=~-\left({\frac {\ m\ c\ }{\hbar }}\right)^{2}\phi \ }
{\displaystyle \ \Box \ \mathbf {A} +\nabla \left({\frac {1}{\ c^{2}}}\ {\frac {\ \partial \phi \ }{\partial t}}+\nabla \cdot \mathbf {A} \right)~=~-\left({\frac {\ m\ c\ }{\hbar }}\right)^{2}\mathbf {A} \ }
and
{\displaystyle \ \Box \ }
is the D'Alembert operator.
== Gauge fixing ==
The Proca action is the gauge-fixed version of the Stueckelberg action via the Higgs mechanism. Quantizing the Proca action requires the use of second class constraints.
If
{\displaystyle \ m\neq 0\ ,}
they are not invariant under the gauge transformations of electromagnetism
{\displaystyle \ B^{\mu }\mapsto B^{\mu }-\partial ^{\mu }f\ }
where
{\displaystyle \ f\ }
is an arbitrary function.
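This can be illustrated symbolically: under B_μ → B_μ − ∂_μ f the field-strength combination is unchanged, while the mass term is not. The SymPy sketch below is a real-field toy version that ignores complex conjugation; all names are ours:

```python
import sympy as sp

x = sp.symbols('x0:4')
B = [sp.Function(f'B{m}')(*x) for m in range(4)]
f = sp.Function('f')(*x)
Bg = [B[m] - sp.diff(f, x[m]) for m in range(4)]   # gauge-transformed potential

def F(A):
    # field strength with lower indices: F_{mn} = d_m A_n - d_n A_m
    return sp.Matrix(4, 4, lambda m, n: sp.diff(A[n], x[m]) - sp.diff(A[m], x[n]))

dF = sp.simplify(F(Bg) - F(B))           # zero matrix: kinetic term is invariant
eta = sp.diag(1, -1, -1, -1)
mass = lambda A: sum(eta[m, m] * A[m]**2 for m in range(4))
dmass = sp.simplify(mass(Bg) - mass(B))  # nonzero: the mass term breaks gauge symmetry
```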
== See also ==
Electromagnetic field
Photon
Quantum electrodynamics
Quantum gravity
Vector boson
Relativistic wave equations
Klein–Gordon equation (spin 0)
Dirac equation (spin 1/2)
== References ==
== Further reading ==
Supersymmetry Demystified, P. Labelle, McGraw-Hill (USA), 2010, ISBN 978-0-07-163641-4
Quantum Field Theory, D. McMahon, McGraw-Hill (USA), 2008, ISBN 978-0-07-154382-8
Quantum Mechanics Demystified, D. McMahon, McGraw-Hill (USA), 2006, ISBN 0-07-145546-9
An operator is a function that maps one space of physical states onto another space of states. The simplest example of the utility of operators is the study of symmetry (which makes the concept of a group useful in this context). Because of this, they are useful tools in classical mechanics. Operators are even more important in quantum mechanics, where they form an intrinsic part of the formulation of the theory. They play a central role in describing observables (measurable quantities such as energy, momentum, etc.).
== Operators in classical mechanics ==
In classical mechanics, the movement of a particle (or system of particles) is completely determined by the Lagrangian
{\displaystyle L(q,{\dot {q}},t)}
or equivalently the Hamiltonian
{\displaystyle H(q,p,t)}
, a function of the generalized coordinates q, generalized velocities
{\displaystyle {\dot {q}}=\mathrm {d} q/\mathrm {d} t}
and its conjugate momenta:
{\displaystyle p={\frac {\partial L}{\partial {\dot {q}}}}}
If either L or H is independent of a generalized coordinate q, meaning that L and H do not change when q is varied (so the dynamics of the particle are unchanged), then the momentum conjugate to that coordinate is conserved. This is part of Noether's theorem: the invariance of the motion with respect to the coordinate q is a symmetry. Operators in classical mechanics are related to these symmetries.
More technically, when H is invariant under the action of a certain group of transformations G:
{\displaystyle S\in G,H(S(q,p))=H(q,p)}.
The elements of G are physical operators, which map physical states among themselves.
=== Table of classical mechanics operators ===
where
{\displaystyle R({\hat {\boldsymbol {n}}},\theta )}
is the rotation matrix about an axis defined by the unit vector
{\displaystyle {\hat {\boldsymbol {n}}}}
and angle θ.
== Generators ==
If the transformation is infinitesimal, the operator action should be of the form
{\displaystyle I+\epsilon A,}
where
{\displaystyle I}
is the identity operator,
{\displaystyle \epsilon }
is a parameter with a small value, and
{\displaystyle A}
will depend on the transformation at hand, and is called a generator of the group. Again, as a simple example, we will derive the generator of the space translations on 1D functions.
As it was stated,
{\displaystyle T_{a}f(x)=f(x-a)}
. If
{\displaystyle a=\epsilon }
is infinitesimal, then we may write
{\displaystyle T_{\epsilon }f(x)=f(x-\epsilon )\approx f(x)-\epsilon f'(x).}
This formula may be rewritten as
{\displaystyle T_{\epsilon }f(x)=(I-\epsilon D)f(x)}
where
{\displaystyle D}
is the generator of the translation group, which in this case happens to be the derivative operator. Thus, it is said that the generator of translations is the derivative.
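A quick numerical check of this statement (an illustration with f = sin and a small ε, not part of the source) compares f(x − ε) with (I − εD)f(x):

```python
import numpy as np

f = np.sin
fprime = np.cos                       # D f for f = sin
x = np.linspace(0.0, 2.0 * np.pi, 7)
eps = 1e-5

exact = f(x - eps)                    # T_eps f(x) = f(x - eps)
first_order = f(x) - eps * fprime(x)  # (I - eps D) f(x)

err = np.max(np.abs(exact - first_order))
```

The residual is of order ε², exactly the size of the terms dropped in the infinitesimal expansion.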
== The exponential map ==
The whole group may be recovered, under normal circumstances, from the generators, via the exponential map. In the case of the translations the idea works like this.
The translation for a finite value of
{\displaystyle a}
may be obtained by repeated application of the infinitesimal translation:
{\displaystyle T_{a}f(x)=\lim _{N\to \infty }T_{a/N}\cdots T_{a/N}f(x)}
with the
{\displaystyle \cdots }
standing for the application
{\displaystyle N}
times. If
{\displaystyle N}
is large, each of the factors may be considered to be infinitesimal:
{\displaystyle T_{a}f(x)=\lim _{N\to \infty }\left(I-{\frac {a}{N}}D\right)^{N}f(x).}
But this limit may be rewritten as an exponential:
{\displaystyle T_{a}f(x)=\exp(-aD)f(x).}
To be convinced of the validity of this formal expression, we may expand the exponential in a power series:
{\displaystyle T_{a}f(x)=\left(I-aD+{a^{2}D^{2} \over 2!}-{a^{3}D^{3} \over 3!}+\cdots \right)f(x).}
The right-hand side may be rewritten as
{\displaystyle f(x)-af'(x)+{\frac {a^{2}}{2!}}f''(x)-{\frac {a^{3}}{3!}}f^{(3)}(x)+\cdots }
which is just the Taylor expansion of
{\displaystyle f(x-a)}
, which was our original value for
{\displaystyle T_{a}f(x)}.
The mathematical properties of physical operators are a topic of great importance in itself. For further information, see C*-algebra and Gelfand–Naimark theorem.
== Operators in quantum mechanics ==
The mathematical formulation of quantum mechanics (QM) is built upon the concept of an operator.
Physical pure states in quantum mechanics are represented as unit-norm vectors (probabilities are normalized to one) in a special complex Hilbert space. Time evolution in this vector space is given by the application of the evolution operator.
Any observable, i.e., any quantity which can be measured in a physical experiment, should be associated with a self-adjoint linear operator. The operators must yield real eigenvalues, since they are values which may come up as the result of the experiment. Mathematically this means the operators must be Hermitian. The probability of each eigenvalue is related to the projection of the physical state on the subspace related to that eigenvalue. See below for mathematical details about Hermitian operators.
In the wave mechanics formulation of QM, the wavefunction varies with space and time, or equivalently momentum and time (see position and momentum space for details), so observables are differential operators.
In the matrix mechanics formulation, the norm of the physical state should stay fixed, so the evolution operator should be unitary, and the operators can be represented as matrices. Any other symmetry, mapping a physical state into another, should keep this restriction.
=== Wavefunction ===
The wavefunction must be square-integrable (see Lp spaces), meaning:
{\displaystyle \iiint _{\mathbb {R} ^{3}}|\psi (\mathbf {r} )|^{2}\,d^{3}\mathbf {r} =\iiint _{\mathbb {R} ^{3}}\psi (\mathbf {r} )^{*}\psi (\mathbf {r} )\,d^{3}\mathbf {r} <\infty }
and normalizable, so that:
{\displaystyle \iiint _{\mathbb {R} ^{3}}|\psi (\mathbf {r} )|^{2}\,d^{3}\mathbf {r} =1}
Two cases of eigenstates (and eigenvalues) are:
for discrete eigenstates
{\displaystyle |\psi _{i}\rangle }
forming a discrete basis, so any state is a sum
{\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle }
where ci are complex numbers such that |ci|2 = ci*ci is the probability of measuring the state
{\displaystyle |\phi _{i}\rangle }
, and the corresponding set of eigenvalues ai is also discrete - either finite or countably infinite. In this case, the inner product of two eigenstates is given by
{\displaystyle \langle \phi _{i}\vert \phi _{j}\rangle =\delta _{ij}}
, where
{\displaystyle \delta _{mn}}
denotes the Kronecker Delta. However,
for a continuum of eigenstates forming a continuous basis, any state is an integral
{\displaystyle |\psi \rangle =\int c(\phi )\,d\phi |\phi \rangle }
where c(φ) is a complex function such that |c(φ)|2 = c(φ)*c(φ) is the probability of measuring the state
{\displaystyle |\phi \rangle }
, and there is an uncountably infinite set of eigenvalues a. In this case, the inner product of two eigenstates is defined as
{\displaystyle \langle \phi '\vert \phi \rangle =\delta (\phi -\phi ')}
, where here
{\displaystyle \delta (x-y)}
denotes the Dirac Delta.
=== Linear operators in wave mechanics ===
Let ψ be the wavefunction for a quantum system, and
{\displaystyle {\hat {A}}}
be any linear operator for some observable A (such as position, momentum, energy, angular momentum etc.). If ψ is an eigenfunction of the operator
{\displaystyle {\hat {A}}}
, then
{\displaystyle {\hat {A}}\psi =a\psi ,}
where a is the eigenvalue of the operator, corresponding to the measured value of the observable, i.e. observable A has a measured value a.
If ψ is an eigenfunction of a given operator
{\displaystyle {\hat {A}}}
, then a definite quantity (the eigenvalue a) will be observed if a measurement of the observable A is made on the state ψ. Conversely, if ψ is not an eigenfunction of
{\displaystyle {\hat {A}}}
, then it has no eigenvalue for
{\displaystyle {\hat {A}}}
, and the observable does not have a single definite value in that case. Instead, measurements of the observable A will yield each eigenvalue with a certain probability (related to the decomposition of ψ relative to the orthonormal eigenbasis of
{\displaystyle {\hat {A}}}
).
In bra–ket notation the above can be written:
{\displaystyle {\begin{aligned}{\hat {A}}\psi &={\hat {A}}\psi (\mathbf {r} )={\hat {A}}\left\langle \mathbf {r} \mid \psi \right\rangle =\left\langle \mathbf {r} \left\vert {\hat {A}}\right\vert \psi \right\rangle \\a\psi &=a\psi (\mathbf {r} )=a\left\langle \mathbf {r} \mid \psi \right\rangle =\left\langle \mathbf {r} \mid a\mid \psi \right\rangle \\\end{aligned}}}
that are equal if
{\displaystyle \left|\psi \right\rangle }
is an eigenvector, or eigenket of the observable A.
Due to linearity, vectors can be defined in any number of dimensions, as each component of the vector acts on the function separately. One mathematical example is the del operator, which is itself a vector (useful in momentum-related quantum operators, in the table below).
An operator in n-dimensional space can be written:
{\displaystyle \mathbf {\hat {A}} =\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}}
where ej are basis vectors corresponding to each component operator Aj. Each component will yield a corresponding eigenvalue
{\displaystyle a_{j}}
. Acting this on the wave function ψ:
{\displaystyle \mathbf {\hat {A}} \psi =\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi =\sum _{j=1}^{n}\left(\mathbf {e} _{j}{\hat {A}}_{j}\psi \right)=\sum _{j=1}^{n}\left(\mathbf {e} _{j}a_{j}\psi \right)}
in which we have used
{\displaystyle {\hat {A}}_{j}\psi =a_{j}\psi .}
In bra–ket notation:
{\displaystyle {\begin{aligned}\mathbf {\hat {A}} \psi =\mathbf {\hat {A}} \psi (\mathbf {r} )=\mathbf {\hat {A}} \left\langle \mathbf {r} \mid \psi \right\rangle &=\left\langle \mathbf {r} \left\vert \mathbf {\hat {A}} \right\vert \psi \right\rangle \\\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi =\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi (\mathbf {r} )=\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\left\langle \mathbf {r} \mid \psi \right\rangle &=\left\langle \mathbf {r} \left\vert \sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right\vert \psi \right\rangle \end{aligned}}}
=== Commutation of operators on Ψ ===
If two observables A and B have linear operators
{\displaystyle {\hat {A}}}
and
{\displaystyle {\hat {B}}}
, the commutator is defined by,
{\displaystyle \left[{\hat {A}},{\hat {B}}\right]={\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}}}
The commutator is itself a (composite) operator. Acting the commutator on ψ gives:
{\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi ={\hat {A}}{\hat {B}}\psi -{\hat {B}}{\hat {A}}\psi .}
If ψ is an eigenfunction with eigenvalues a and b for observables A and B respectively, and if the operators commute:
{\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi =0,}
then the observables A and B can be measured simultaneously with infinite precision, i.e., uncertainties
{\displaystyle \Delta A=0}
,
{\displaystyle \Delta B=0}
simultaneously. ψ is then said to be the simultaneous eigenfunction of A and B. To illustrate this:
{\displaystyle {\begin{aligned}\left[{\hat {A}},{\hat {B}}\right]\psi &={\hat {A}}{\hat {B}}\psi -{\hat {B}}{\hat {A}}\psi \\&=a(b\psi )-b(a\psi )\\&=0.\\\end{aligned}}}
It shows that measurement of A and B does not cause any shift of state, i.e., initial and final states are same (no disturbance due to measurement). Suppose we measure A to get value a. We then measure B to get the value b. We measure A again. We still get the same value a. Clearly the state (ψ) of the system is not destroyed and so we are able to measure A and B simultaneously with infinite precision.
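As a concrete sketch (our own example with spin-1/2 matrices, ħ = 1, not part of the source), the commutator can be evaluated directly with matrices: S_x and S_y fail to commute, while each component commutes with S², so S_z and S² can be measured simultaneously:

```python
import numpy as np

# spin-1/2 operators in units hbar = 1: S_i = sigma_i / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
s2 = sx @ sx + sy @ sy + sz @ sz          # total spin squared, (3/4) I

def comm(a, b):
    return a @ b - b @ a

# S_x and S_y do not commute: [S_x, S_y] = i S_z
noncomm_err = np.max(np.abs(comm(sx, sy) - 1j * sz))
# but each component commutes with S^2, so S_z and S^2 share eigenstates
compat_err = np.max(np.abs(comm(sz, s2)))
```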
If the operators do not commute:
{\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi \neq 0,}
they cannot be prepared simultaneously to arbitrary precision, and there is an uncertainty relation between the observables
{\displaystyle \Delta A\Delta B\geq \left|{\frac {1}{2}}\langle [A,B]\rangle \right|}
This relation holds even if ψ is an eigenfunction of one of the operators. Notable pairs are the position–momentum and energy–time uncertainty relations, and the angular momenta (spin, orbital and total) about any two orthogonal axes (such as Lx and Ly, or sy and sz, etc.).
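The inequality can be spot-checked numerically. The sketch below (our illustration, using Pauli matrices and a random normalized state) evaluates both sides of the Robertson relation ΔAΔB ≥ |⟨[A,B]⟩|/2:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                        # normalized state

def expval(op, s):
    return np.vdot(s, op @ s).real                # <s|op|s> for Hermitian op

def spread(op, s):
    return np.sqrt(expval(op @ op, s) - expval(op, s) ** 2)

comm = sx @ sy - sy @ sx                          # equals 2i sigma_z
lhs = spread(sx, psi) * spread(sy, psi)           # Delta A * Delta B
rhs = 0.5 * abs(np.vdot(psi, comm @ psi))         # |<[A,B]>| / 2
```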
=== Expectation values of operators on Ψ ===
The expectation value (equivalently the average or mean value) is the average measurement of an observable for a particle in region R. The expectation value
{\displaystyle \left\langle {\hat {A}}\right\rangle }
of the operator
{\displaystyle {\hat {A}}}
is calculated from:
{\displaystyle \left\langle {\hat {A}}\right\rangle =\int _{R}\psi ^{*}\left(\mathbf {r} \right){\hat {A}}\psi \left(\mathbf {r} \right)\mathrm {d} ^{3}\mathbf {r} =\left\langle \psi \left|{\hat {A}}\right|\psi \right\rangle .}
This can be generalized to any function F of an operator:
{\displaystyle \left\langle F\left({\hat {A}}\right)\right\rangle =\int _{R}\psi (\mathbf {r} )^{*}\left[F\left({\hat {A}}\right)\psi (\mathbf {r} )\right]\mathrm {d} ^{3}\mathbf {r} =\left\langle \psi \left|F\left({\hat {A}}\right)\right|\psi \right\rangle ,}
An example of F is the 2-fold action of A on ψ, i.e. squaring an operator or doing it twice:
{\displaystyle {\begin{aligned}F\left({\hat {A}}\right)&={\hat {A}}^{2}\\\Rightarrow \left\langle {\hat {A}}^{2}\right\rangle &=\int _{R}\psi ^{*}\left(\mathbf {r} \right){\hat {A}}^{2}\psi \left(\mathbf {r} \right)\mathrm {d} ^{3}\mathbf {r} =\left\langle \psi \left\vert {\hat {A}}^{2}\right\vert \psi \right\rangle \\\end{aligned}}\,\!}
=== Hermitian operators ===
The definition of a Hermitian operator is:
{\displaystyle {\hat {A}}={\hat {A}}^{\dagger }}
Following from this, in bra–ket notation:
{\displaystyle \left\langle \phi _{i}\left|{\hat {A}}\right|\phi _{j}\right\rangle =\left\langle \phi _{j}\left|{\hat {A}}\right|\phi _{i}\right\rangle ^{*}.}
Important properties of Hermitian operators include:
real eigenvalues,
eigenvectors with different eigenvalues are orthogonal,
eigenvectors can be chosen to be a complete orthonormal basis.
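These three properties are easy to confirm numerically for a randomly generated Hermitian matrix (an illustration only; the names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = m + m.conj().T                         # Hermitian by construction

evals, evecs = np.linalg.eigh(A)

imag_part = np.max(np.abs(np.imag(evals)))                       # eigenvalues are real
ortho_err = np.max(np.abs(evecs.conj().T @ evecs - np.eye(4)))   # columns orthonormal
recon_err = np.max(np.abs(evecs @ np.diag(evals) @ evecs.conj().T - A))  # complete basis
```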
=== Operators in matrix mechanics ===
An operator can be written in matrix form to map one basis vector to another. Since the operators are linear, the matrix is a linear transformation (aka transition matrix) between bases. Each basis element
{\displaystyle \phi _{j}}
can be connected to another, by the expression:
{\displaystyle A_{ij}=\left\langle \phi _{i}\left|{\hat {A}}\right|\phi _{j}\right\rangle ,}
which is a matrix element:
{\displaystyle {\hat {A}}={\begin{pmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{pmatrix}}}
A further property of a Hermitian operator is that eigenfunctions corresponding to different eigenvalues are orthogonal. In matrix form, operators allow real eigenvalues to be found, corresponding to measurements. Orthogonality allows a suitable basis set of vectors to represent the state of the quantum system. The eigenvalues of the operator are also evaluated in the same way as for the square matrix, by solving the characteristic polynomial:
{\displaystyle \det \left({\hat {A}}-a{\hat {I}}\right)=0,}
where I is the n × n identity matrix, as an operator it corresponds to the identity operator. For a discrete basis:
{\displaystyle {\hat {I}}=\sum _{i}|\phi _{i}\rangle \langle \phi _{i}|}
while for a continuous basis:
{\displaystyle {\hat {I}}=\int |\phi \rangle \langle \phi |\mathrm {d} \phi }
=== Inverse of an operator ===
A non-singular operator
{\displaystyle {\hat {A}}}
has an inverse
{\displaystyle {\hat {A}}^{-1}}
defined by:
{\displaystyle {\hat {A}}{\hat {A}}^{-1}={\hat {A}}^{-1}{\hat {A}}={\hat {I}}}
If an operator has no inverse, it is a singular operator. In a finite-dimensional space, an operator is non-singular if and only if its determinant is nonzero:
{\displaystyle \det \left({\hat {A}}\right)\neq 0}
and hence the determinant is zero for a singular operator.
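As a minimal numerical illustration (not from the source), a 2×2 matrix with nonzero determinant satisfies both defining products:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])      # det = 5, so A is non-singular
Ainv = np.linalg.inv(A)

left_err = np.max(np.abs(A @ Ainv - np.eye(2)))
right_err = np.max(np.abs(Ainv @ A - np.eye(2)))
d = np.linalg.det(A)
```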
=== Table of Quantum Mechanics operators ===
The operators used in quantum mechanics are collected in the table below. The bold-face vectors with circumflexes are not unit vectors: they are 3-vector operators, all three spatial components taken together.
=== Examples of applying quantum operators ===
The procedure for extracting information from a wave function is as follows. Consider the momentum p of a particle as an example. The momentum operator in position basis in one dimension is:
{\displaystyle {\hat {p}}=-i\hbar {\frac {\partial }{\partial x}}}
Letting this act on ψ we obtain:
{\displaystyle {\hat {p}}\psi =-i\hbar {\frac {\partial }{\partial x}}\psi ,}
if ψ is an eigenfunction of
{\displaystyle {\hat {p}}}
, then the momentum eigenvalue p is the value of the particle's momentum, found by:
{\displaystyle -i\hbar {\frac {\partial }{\partial x}}\psi =p\psi .}
For three dimensions the momentum operator uses the nabla operator to become:
{\displaystyle \mathbf {\hat {p}} =-i\hbar \nabla .}
In Cartesian coordinates (using the standard Cartesian basis vectors ex, ey, ez) this can be written:
{\displaystyle \mathbf {e} _{\mathrm {x} }{\hat {p}}_{x}+\mathbf {e} _{\mathrm {y} }{\hat {p}}_{y}+\mathbf {e} _{\mathrm {z} }{\hat {p}}_{z}=-i\hbar \left(\mathbf {e} _{\mathrm {x} }{\frac {\partial }{\partial x}}+\mathbf {e} _{\mathrm {y} }{\frac {\partial }{\partial y}}+\mathbf {e} _{\mathrm {z} }{\frac {\partial }{\partial z}}\right),}
that is:
{\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {\partial }{\partial x}},\quad {\hat {p}}_{y}=-i\hbar {\frac {\partial }{\partial y}},\quad {\hat {p}}_{z}=-i\hbar {\frac {\partial }{\partial z}}\,\!}
The process of finding eigenvalues is the same. Since this is a vector and operator equation, if ψ is an eigenfunction, then each component of the momentum operator will have an eigenvalue corresponding to that component of momentum. Acting
{\displaystyle \mathbf {\hat {p}} }
on ψ obtains:
{\displaystyle {\begin{aligned}{\hat {p}}_{x}\psi &=-i\hbar {\frac {\partial }{\partial x}}\psi =p_{x}\psi \\{\hat {p}}_{y}\psi &=-i\hbar {\frac {\partial }{\partial y}}\psi =p_{y}\psi \\{\hat {p}}_{z}\psi &=-i\hbar {\frac {\partial }{\partial z}}\psi =p_{z}\psi \\\end{aligned}}\,\!}
== See also ==
== References ==
The Bethe–Salpeter equation (BSE, named after Hans Bethe and Edwin Salpeter) is an integral equation whose solution describes the structure of a relativistic two-body bound state in the covariant formalism of quantum field theory (QFT). The equation was first published in 1950 at the end of a paper by Yoichiro Nambu, but without derivation.
Due to its common application in several branches of theoretical physics, the Bethe–Salpeter equation appears in many forms. One form often used in high energy physics is
{\displaystyle \Gamma (P,p)=\int \!{\frac {d^{4}k}{(2\pi )^{4}}}\;K(P,p,k)\,S(k-{\tfrac {P}{2}})\,\Gamma (P,k)\,S(k+{\tfrac {P}{2}})}
where
{\displaystyle \Gamma }
is the Bethe–Salpeter amplitude (BSA),
{\displaystyle K}
the Green's function representing the interaction and
{\displaystyle S}
the dressed propagators of the two constituent particles.
In quantum theory, bound states are composite physical systems with a lifetime significantly longer than the time scale of the interaction that would break their structure (otherwise the physical systems under consideration are called resonances), thus allowing ample time for the constituents to interact. By accounting for all possible interactions that can occur between the two constituents, the BSE is a tool to calculate properties of deeply bound states. The BSA, as its solution, encodes the structure of the bound state under consideration.
Since it can be derived by identifying bound states with poles in the S-matrix of the 4-point function involving the constituent particles, the equation is related to the quantum-field description of scattering processes using Green's functions.
As a general-purpose tool, applications of the BSE can be found in most quantum field theories. Examples include positronium (the bound state of an electron–positron pair), excitons (bound states of electron–hole pairs), and mesons (quark–antiquark bound states).
Even for simple systems such as positronium, the equation cannot be solved exactly within quantum electrodynamics (QED), despite its exact formulation. A reduction of the equation can, however, be achieved without an exact solution. In the case where particle-pair production can be ignored and one of the two fermion constituents is significantly more massive than the other, the system simplifies to the Dirac equation for the light particle in the external potential of the heavy one.
== Derivation ==
The starting point for the derivation of the Bethe–Salpeter equation is the two-particle (or four point) Dyson equation
{\displaystyle G=S_{1}\,S_{2}+S_{1}\,S_{2}\,K_{12}\,G}
in momentum space, where "G" is the two-particle Green function
{\displaystyle \langle \Omega |\phi _{1}\,\phi _{2}\,\phi _{3}\,\phi _{4}|\Omega \rangle }
, "S" are the free propagators and "K" is an interaction kernel, which contains all possible interactions between the two particles. The crucial step is now, to assume that bound states appear as poles in the Green function. One assumes, that two particles come together and form a bound state with mass "M", this bound state propagates freely, and then the bound state splits in its two constituents again. Therefore, one introduces the Bethe–Salpeter wave function
{\displaystyle \Psi =\langle \Omega |\phi _{1}\,\phi _{2}|\psi \rangle }
, which is a transition amplitude of two constituents
{\displaystyle \phi _{i}}
into a bound state
{\displaystyle \psi }
, and then makes an Ansatz for the Green function in the vicinity of the pole as
{\displaystyle G\approx {\frac {\Psi \;{\bar {\Psi }}}{P^{2}-M^{2}}},}
where P is the total momentum of the system. One sees, that if for this momentum the equation
{\displaystyle P^{2}=M^{2}}
holds, which is exactly the Einstein energy-momentum relation (with the Four-momentum
{\displaystyle P_{\mu }=\left(E/c,{\vec {p}}\right)}
and
{\displaystyle P^{2}=P_{\mu }\,P^{\mu }}
), the four-point Green function contains a pole. If one plugs that Ansatz into the Dyson equation above, and sets the total momentum "P" such that the energy-momentum relation holds, on both sides of the term a pole appears.
{\displaystyle {\frac {\Psi \;{\bar {\Psi }}}{P^{2}-M^{2}}}=S_{1}\,S_{2}+S_{1}\,S_{2}\,K_{12}{\frac {\Psi \;{\bar {\Psi }}}{P^{2}-M^{2}}}}
Comparing the residues yields
{\displaystyle \Psi =S_{1}\,S_{2}\,K_{12}\Psi ,\,}
This is already the Bethe–Salpeter equation, written in terms of the Bethe–Salpeter wave functions. To obtain the above form one introduces the Bethe–Salpeter amplitudes "Γ"
{\displaystyle \Psi =S_{1}\,S_{2}\,\Gamma }
and gets finally
{\displaystyle \Gamma =K_{12}\,S_{1}\,S_{2}\,\Gamma }
which is written down above, with the explicit momentum dependence.
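The structure of the derivation can be mimicked in a zero-dimensional toy model in which the propagators and the kernel are plain numbers (entirely our illustration, not a physical calculation): the two-particle Dyson equation G = S1·S2 + S1·S2·K·G then has the closed-form solution G = S1·S2/(1 − S1·S2·K), and a "bound-state pole" appears as S1·S2·K approaches 1:

```python
# zero-dimensional toy model of the two-body Dyson equation
# G = S1*S2 + S1*S2*K*G  has the closed form  G = S1*S2 / (1 - S1*S2*K)
s1, s2 = 0.5, 0.4

def G(K):
    return s1 * s2 / (1.0 - s1 * s2 * K)

K = 2.0
g = G(K)
# the closed form solves the Dyson equation exactly
residual = abs(g - (s1 * s2 + s1 * s2 * K * g))

# the resummed series blows up (a "pole") as K -> 1/(s1*s2) = 5
near_pole = G(4.999)
far = G(0.0)
```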
== Rainbow-ladder approximation ==
In principle the interaction kernel K contains all possible two-particle-irreducible interactions that can occur between the two constituents. In order to carry out practical calculations one has to model it by choosing a subset of the interactions. In quantum field theories, interaction is described via the exchange of particles (e.g. photons in QED, or gluons in quantum chromodynamics). Apart from contact interactions, the simplest interaction is modeled by the exchange of a single such force-carrying particle with a known propagator.
Because the Bethe–Salpeter equation sums up the interaction infinitely many times from a perturbative viewpoint, the resulting Feynman graph resembles a ladder (or rainbow), hence the name of this approximation.
In QED the ladder approximation caused problems with crossing symmetry and gauge invariance, indicating the need to include crossed-ladder terms. In quantum chromodynamics (QCD) this approximation is frequently used phenomenologically to calculate hadron masses and structure in terms of Bethe–Salpeter amplitudes and Faddeev amplitudes; a well-known Ansatz was proposed by Maris and Tandy. Such an Ansatz for the dressed quark-gluon vertex within the rainbow-ladder truncation respects chiral symmetry and its dynamical breaking, and is therefore an important model of the strong nuclear interaction. As an example, the structure of pions can be obtained by solving the Bethe–Salpeter equation in Euclidean space with the Maris–Tandy Ansatz.
== Normalization ==
As for the solution of any homogeneous equation, the Bethe–Salpeter amplitude is determined only up to a numerical factor. This factor has to be specified by a normalization condition. For the Bethe–Salpeter amplitudes this is usually done by demanding probability conservation (similar to the normalization of the quantum mechanical wave function), which corresponds to the equation
{\displaystyle 2P_{\mu }={\bar {\Gamma }}\left({\frac {\partial }{\partial P_{\mu }}}\left(S_{1}\otimes S_{2}\right)-S_{1}\,S_{2}\,\left({\frac {\partial }{\partial P_{\mu }}}\,K\right)\,S_{1}\,S_{2}\right)\Gamma }
Normalizations to the charge and energy–momentum tensor of the bound state lead to the same equation. In the rainbow-ladder approximation the interaction kernel does not depend on the total momentum of the Bethe–Salpeter amplitude, in which case the second term of the normalization condition vanishes. An alternative normalization based on the eigenvalue of the corresponding linear operator was derived by Nakanishi.
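Because the equation is homogeneous, numerical treatments commonly discretize the momentum integral and search for the coupling at which the resulting linear operator has eigenvalue one. The following is only a schematic sketch of that idea; the kernel and propagator below are toy placeholders, not the physical QED/QCD quantities.

```python
import numpy as np

# Schematic sketch: the homogeneous equation Gamma = g * (K S) Gamma is
# discretized on a momentum grid; a bound state exists at the coupling g
# for which the discretized operator has eigenvalue 1.
n = 200
nodes, weights = np.polynomial.legendre.leggauss(n)
p = 5.0 * (nodes + 1.0) / 2.0        # momentum grid mapped to [0, 5]
w = 5.0 * weights / 2.0              # matching quadrature weights

K = np.exp(-(p[:, None] - p[None, :]) ** 2)   # toy interaction kernel K(p, q)
S = 1.0 / (p ** 2 + 1.0)                      # toy two-body propagator product

M = K * (S * w)[None, :]             # matrix form of the integral operator
lam = np.max(np.abs(np.linalg.eigvals(M)))
g_critical = 1.0 / lam               # coupling at which g * lam = 1
print(g_critical)
```

The solution vector returned by the eigensolver is then normalized separately, using a discretized form of the condition above.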
== Solution in the Minkowski space ==
The Bethe–Salpeter equation applies in every kinematic region of the Bethe–Salpeter amplitude, including regions where the amplitude develops singularities. Such singularities usually occur when the constituent momentum is timelike, and they are not directly accessible from Euclidean-space solutions of the equation. Instead, one develops methods to solve this type of integral equation directly in the timelike region. In the case of scalar bound states formed through scalar-particle exchange in the rainbow-ladder truncation, the Bethe–Salpeter equation in Minkowski space can be solved with the assistance of the Nakanishi integral representation.
== See also ==
ABINIT
Araki–Sucher correction
Breit equation
Lippmann–Schwinger equation
Schwinger–Dyson equation
Two-body Dirac equations
YAMBO code
== References ==
== Bibliography ==
Many modern quantum field theory textbooks and a few articles provide pedagogical accounts for the Bethe–Salpeter equation's context and uses. See:
W. Greiner, J. Reinhardt (2003). Quantum Electrodynamics (3rd ed.). Springer. ISBN 978-3-540-44029-1.
Z.K. Silagadze (1998). "Wick–Cutkosky model: An introduction". arXiv:hep-ph/9803307.
A good introduction is still provided by the review article of Nakanishi:
N. Nakanishi (1969). "A general survey of the theory of the Bethe–Salpeter equation". Progress of Theoretical Physics Supplement. 43: 1–81. Bibcode:1969PThPS..43....1N. doi:10.1143/PTPS.43.1.
For historical aspects, see
E.E. Salpeter (2008). "Bethe–Salpeter equation (origins)". Scholarpedia. 3 (11): 7483. arXiv:0811.1050. Bibcode:2008SchpJ...3.7483S. doi:10.4249/scholarpedia.7483. S2CID 32913032.
== External links to codes implementing the Bethe–Salpeter equation ==
Yambo – plane-wave pseudopotential
BerkeleyGW – plane-wave pseudopotential
ExC – plane-wave pseudopotential
Fiesta – Gaussian all-electron
Abinit – plane-wave pseudopotential
VASP – plane-wave pseudopotential
For a more comprehensive list of first principles codes see here: List of quantum chemistry and solid-state physics software
In mathematics, a volume element provides a means for integrating a function with respect to volume in various coordinate systems such as spherical coordinates and cylindrical coordinates. Thus a volume element is an expression of the form
{\displaystyle \mathrm {d} V=\rho (u_{1},u_{2},u_{3})\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}\,\mathrm {d} u_{3}}
where the {\displaystyle u_{i}} are the coordinates, so that the volume of any set {\displaystyle B} can be computed by
{\displaystyle \operatorname {Volume} (B)=\int _{B}\rho (u_{1},u_{2},u_{3})\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}\,\mathrm {d} u_{3}.}
For example, in spherical coordinates
d
V
=
u
1
2
sin
u
2
d
u
1
d
u
2
d
u
3
{\displaystyle \mathrm {d} V=u_{1}^{2}\sin u_{2}\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}\,\mathrm {d} u_{3}}
, and so
ρ
=
u
1
2
sin
u
2
{\displaystyle \rho =u_{1}^{2}\sin u_{2}}
.
The notion of a volume element is not limited to three dimensions: in two dimensions it is often known as the area element, and in this setting it is useful for doing surface integrals. Under changes of coordinates, the volume element changes by the absolute value of the Jacobian determinant of the coordinate transformation (by the change of variables formula). This fact allows volume elements to be defined as a kind of measure on a manifold. On an orientable differentiable manifold, a volume element typically arises from a volume form: a top degree differential form. On a non-orientable manifold, the volume element is typically the absolute value of a (locally defined) volume form: it defines a 1-density.
== Volume element in Euclidean space ==
In Euclidean space, the volume element is given by the product of the differentials of the Cartesian coordinates
{\displaystyle \mathrm {d} V=\mathrm {d} x\,\mathrm {d} y\,\mathrm {d} z.}
In different coordinate systems of the form {\displaystyle x=x(u_{1},u_{2},u_{3})}, {\displaystyle y=y(u_{1},u_{2},u_{3})}, {\displaystyle z=z(u_{1},u_{2},u_{3})}, the volume element changes by the Jacobian (determinant) of the coordinate change:
{\displaystyle \mathrm {d} V=\left|{\frac {\partial (x,y,z)}{\partial (u_{1},u_{2},u_{3})}}\right|\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}\,\mathrm {d} u_{3}.}
For example, in spherical coordinates (mathematical convention)
{\displaystyle {\begin{aligned}x&=\rho \cos \theta \sin \phi \\y&=\rho \sin \theta \sin \phi \\z&=\rho \cos \phi \end{aligned}}}
the Jacobian determinant is
{\displaystyle \left|{\frac {\partial (x,y,z)}{\partial (\rho ,\phi ,\theta )}}\right|=\rho ^{2}\sin \phi }
so that
{\displaystyle \mathrm {d} V=\rho ^{2}\sin \phi \,\mathrm {d} \rho \,\mathrm {d} \theta \,\mathrm {d} \phi .}
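This Jacobian determinant can be checked symbolically; a short SymPy sketch reproducing the computation above:

```python
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta', positive=True)

# Spherical coordinates in the mathematical convention used above.
x = rho * sp.cos(theta) * sp.sin(phi)
y = rho * sp.sin(theta) * sp.sin(phi)
z = rho * sp.cos(phi)

# Jacobian determinant of (x, y, z) with respect to (rho, phi, theta).
det_J = sp.Matrix([x, y, z]).jacobian([rho, phi, theta]).det()
print(sp.simplify(det_J))   # rho**2*sin(phi)
```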
This can be seen as a special case of the fact that differential forms transform through a pullback {\displaystyle F^{*}} as
{\displaystyle F^{*}(u\;dy^{1}\wedge \cdots \wedge dy^{n})=(u\circ F)\det \left({\frac {\partial F^{j}}{\partial x^{i}}}\right)\mathrm {d} x^{1}\wedge \cdots \wedge \mathrm {d} x^{n}}
== Volume element of a linear subspace ==
Consider the linear subspace of the n-dimensional Euclidean space Rn that is spanned by a collection of linearly independent vectors {\displaystyle X_{1},\dots ,X_{k}.}
To find the volume element of the subspace, it is useful to know the fact from linear algebra that the volume of the parallelepiped spanned by the {\displaystyle X_{i}} is the square root of the determinant of the Gramian matrix of the {\displaystyle X_{i}}:
{\displaystyle {\sqrt {\det(X_{i}\cdot X_{j})_{i,j=1\dots k}}}.}
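As an illustration, the Gramian formula can be checked numerically for two vectors in R3, where the parallelogram area is also given by the cross product (a small NumPy sketch):

```python
import numpy as np

# Area of the parallelogram spanned by X_1, X_2 in R^3 via sqrt(det(Gram)).
X = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0]])        # rows are X_1 and X_2

gram = X @ X.T                         # Gramian matrix (X_i . X_j)
area = np.sqrt(np.linalg.det(gram))

# Cross-check against |X_1 x X_2|, valid in this 3-D special case:
print(area, np.linalg.norm(np.cross(X[0], X[1])))   # both equal 1.0
```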
Any point p in the subspace can be given coordinates {\displaystyle (u_{1},u_{2},\dots ,u_{k})} such that
{\displaystyle p=u_{1}X_{1}+\cdots +u_{k}X_{k}.}
At a point p, if we form a small parallelepiped with sides {\displaystyle \mathrm {d} u_{i}}, then the volume of that parallelepiped is the square root of the determinant of the Gramian matrix
{\displaystyle {\sqrt {\det \left((du_{i}X_{i})\cdot (du_{j}X_{j})\right)_{i,j=1\dots k}}}={\sqrt {\det(X_{i}\cdot X_{j})_{i,j=1\dots k}}}\;\mathrm {d} u_{1}\,\mathrm {d} u_{2}\,\cdots \,\mathrm {d} u_{k}.}
This therefore defines the volume form in the linear subspace.
== Volume element of manifolds ==
On an oriented Riemannian manifold of dimension n, the volume element is a volume form equal to the Hodge dual of the unit constant function,
{\displaystyle f(x)=1}:
{\displaystyle \omega =\star 1.}
Equivalently, the volume element is precisely the Levi-Civita tensor {\displaystyle \epsilon }. In coordinates,
{\displaystyle \omega =\epsilon ={\sqrt {\left|\det g\right|}}\,\mathrm {d} x^{1}\wedge \cdots \wedge \mathrm {d} x^{n}}
where {\displaystyle \det g} is the determinant of the metric tensor g written in the coordinate system.
=== Area element of a surface ===
A simple example of a volume element can be explored by considering a two-dimensional surface embedded in n-dimensional Euclidean space. Such a volume element is sometimes called an area element. Consider a subset
{\displaystyle U\subset \mathbb {R} ^{2}} and a mapping function {\displaystyle \varphi :U\to \mathbb {R} ^{n}} thus defining a surface embedded in {\displaystyle \mathbb {R} ^{n}}. In two dimensions, volume is just area, and a volume element gives a way to determine the area of parts of the surface. Thus a volume element is an expression of the form
{\displaystyle f(u_{1},u_{2})\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}}
that allows one to compute the area of a set B lying on the surface by computing the integral
{\displaystyle \operatorname {Area} (B)=\int _{B}f(u_{1},u_{2})\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}.}
Here we will find the volume element on the surface that defines area in the usual sense. The Jacobian matrix of the mapping is
{\displaystyle J_{ij}={\frac {\partial \varphi _{i}}{\partial u_{j}}}}
with index i running from 1 to n, and j running from 1 to 2. The Euclidean metric in the n-dimensional space induces a metric
{\displaystyle g=J^{T}J} on the set U, with matrix elements
{\displaystyle g_{ij}=\sum _{k=1}^{n}J_{ki}J_{kj}=\sum _{k=1}^{n}{\frac {\partial \varphi _{k}}{\partial u_{i}}}{\frac {\partial \varphi _{k}}{\partial u_{j}}}.}
The determinant of the metric is given by
{\displaystyle \det g=\left|{\frac {\partial \varphi }{\partial u_{1}}}\wedge {\frac {\partial \varphi }{\partial u_{2}}}\right|^{2}=\det(J^{T}J)}
For a regular surface, this determinant is non-vanishing; equivalently, the Jacobian matrix has rank 2.
Now consider a change of coordinates on U, given by a diffeomorphism
{\displaystyle f\colon U\to U,}
so that the coordinates {\displaystyle (u_{1},u_{2})} are given in terms of {\displaystyle (v_{1},v_{2})} by {\displaystyle (u_{1},u_{2})=f(v_{1},v_{2})}. The Jacobian matrix of this transformation is given by
{\displaystyle F_{ij}={\frac {\partial f_{i}}{\partial v_{j}}}.}
In the new coordinates, we have
{\displaystyle {\frac {\partial \varphi _{i}}{\partial v_{j}}}=\sum _{k=1}^{2}{\frac {\partial \varphi _{i}}{\partial u_{k}}}{\frac {\partial f_{k}}{\partial v_{j}}}}
and so the metric transforms as
{\displaystyle {\tilde {g}}=F^{T}gF}
where {\displaystyle {\tilde {g}}} is the pullback metric in the v coordinate system. The determinant is
{\displaystyle \det {\tilde {g}}=\det g\left(\det F\right)^{2}.}
Given the above construction, it should now be straightforward to understand how the volume element is invariant under an orientation-preserving change of coordinates.
In two dimensions, the volume is just the area. The area of a subset
{\displaystyle B\subset U} is given by the integral
{\displaystyle {\begin{aligned}{\mbox{Area}}(B)&=\iint _{B}{\sqrt {\det g}}\;\mathrm {d} u_{1}\;\mathrm {d} u_{2}\\[1.6ex]&=\iint _{B}{\sqrt {\det g}}\left|\det F\right|\;\mathrm {d} v_{1}\;\mathrm {d} v_{2}\\[1.6ex]&=\iint _{B}{\sqrt {\det {\tilde {g}}}}\;\mathrm {d} v_{1}\;\mathrm {d} v_{2}.\end{aligned}}}
Thus, in either coordinate system, the volume element takes the same expression: the expression of the volume element is invariant under a change of coordinates.
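The transformation law det g̃ = det g (det F)² can be verified symbolically for a concrete case; the paraboloid surface and linear change of coordinates below are illustrative choices, not canonical examples from the text:

```python
import sympy as sp

u1, u2, v1, v2 = sp.symbols('u1 u2 v1 v2', real=True)

# Illustrative surface phi: U -> R^3 (a paraboloid).
phi = sp.Matrix([u1, u2, u1**2 + u2**2])
g = phi.jacobian([u1, u2]).T * phi.jacobian([u1, u2])    # induced metric

# Illustrative linear change of coordinates (u1, u2) = f(v1, v2).
f = sp.Matrix([v1 + v2, v1 - v2])
F = f.jacobian([v1, v2])

phi_v = phi.subs({u1: f[0], u2: f[1]})
g_tilde = phi_v.jacobian([v1, v2]).T * phi_v.jacobian([v1, v2])

# det g_tilde should equal (det g at u = f(v)) times (det F)^2.
lhs = sp.simplify(g_tilde.det())
rhs = sp.simplify(g.det().subs({u1: f[0], u2: f[1]}) * F.det()**2)
print(sp.simplify(lhs - rhs))   # 0
```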
Note that there was nothing particular to two dimensions in the above presentation; the above trivially generalizes to arbitrary dimensions.
=== Example: Sphere ===
For example, consider the sphere with radius r centered at the origin in R3. This can be parametrized using spherical coordinates with the map
{\displaystyle \phi (u_{1},u_{2})=(r\cos u_{1}\sin u_{2},r\sin u_{1}\sin u_{2},r\cos u_{2}).}
Then
{\displaystyle g={\begin{pmatrix}r^{2}\sin ^{2}u_{2}&0\\0&r^{2}\end{pmatrix}},}
and the area element is
{\displaystyle \omega ={\sqrt {\det g}}\;\mathrm {d} u_{1}\mathrm {d} u_{2}=r^{2}\sin u_{2}\,\mathrm {d} u_{1}\mathrm {d} u_{2}.}
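Integrating this area element over the full parameter range should recover the familiar surface area 4πr²; a short SymPy check:

```python
import sympy as sp

u1, u2, r = sp.symbols('u1 u2 r', positive=True)

# Integrate the area element r^2 sin(u2) over u1 in [0, 2*pi], u2 in [0, pi].
area = sp.integrate(r**2 * sp.sin(u2), (u1, 0, 2 * sp.pi), (u2, 0, sp.pi))
print(area)   # 4*pi*r**2
```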
== See also ==
Cylindrical coordinate system § Line and volume elements
Spherical coordinate system § Integration and differentiation in spherical coordinates
Volume integral
Surface integral
Line integral
Line element
== References ==
Besse, Arthur L. (1987), Einstein manifolds, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], vol. 10, Berlin, New York: Springer-Verlag, pp. xii+510, ISBN 978-3-540-15279-8
Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, and electric and magnetic circuits.
The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside.
Maxwell's equations may be combined to demonstrate how fluctuations in electromagnetic fields (waves) propagate at a constant speed in vacuum, c (299792458 m/s). Known as electromagnetic radiation, these waves occur at various wavelengths to produce a spectrum of radiation from radio waves to gamma rays.
In partial differential equation form and a coherent system of units, Maxwell's microscopic equations can be written as (top to bottom: Gauss's law, Gauss's law for magnetism, Faraday's law, Ampère-Maxwell law)
{\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} \,\,\,&={\frac {\rho }{\varepsilon _{0}}}\\\nabla \cdot \mathbf {B} \,\,\,&=0\\\nabla \times \mathbf {E} &=-{\frac {\partial \mathbf {B} }{\partial t}}\\\nabla \times \mathbf {B} &=\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\end{aligned}}}
Here {\displaystyle \mathbf {E} } is the electric field, {\displaystyle \mathbf {B} } the magnetic field, {\displaystyle \rho } the electric charge density, and {\displaystyle \mathbf {J} } the current density; {\displaystyle \varepsilon _{0}} is the vacuum permittivity and {\displaystyle \mu _{0}} the vacuum permeability.
The equations have two major variants:
The microscopic equations have universal applicability but are unwieldy for common calculations. They relate the electric and magnetic fields to total charge and total current, including the complicated charges and currents in materials at the atomic scale.
The macroscopic equations define two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic-scale charges and quantum phenomena like spins. However, their use requires experimentally determined parameters for a phenomenological description of the electromagnetic response of materials.
The term "Maxwell's equations" is often also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic scalar potentials are preferred for explicitly solving the equations as a boundary value problem, analytical mechanics, or for use in quantum mechanics. The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest. Maxwell's equations in curved spacetime, commonly used in high-energy and gravitational physics, are compatible with general relativity. In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences.
The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation.
Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics.
== History of the equations ==
== Conceptual descriptions ==
=== Gauss's law ===
Gauss's law describes the relationship between an electric field and electric charges: an electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through a closed surface is proportional to the enclosed charge, including bound charge due to polarization of material. The coefficient of the proportion is the permittivity of free space.
=== Gauss's law for magnetism ===
Gauss's law for magnetism states that electric charges have no magnetic analogues, called magnetic monopoles; no north or south magnetic poles exist in isolation. Instead, the magnetic field of a material is attributed to a dipole, and the net outflow of the magnetic field through a closed surface is zero. Magnetic dipoles may be represented as loops of current or inseparable pairs of equal and opposite "magnetic charges". Precisely, the total magnetic flux through a Gaussian surface is zero, and the magnetic field is a solenoidal vector field.
=== Faraday's law ===
The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field corresponds to the negative curl of an electric field. In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of change of the magnetic flux through the enclosed surface.
The electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field and generates an electric field in a nearby wire.
=== Ampère–Maxwell law ===
The original law of Ampère states that magnetic fields relate to electric current. Maxwell's addition states that magnetic fields also relate to changing electric fields, which Maxwell called displacement current. The integral form states that electric and displacement currents are associated with a proportional magnetic field along any enclosing curve.
Maxwell's modification of Ampère's circuital law is important because the laws of Ampère and Gauss must otherwise be adjusted for static fields. As a consequence, it predicts that a rotating magnetic field occurs with a changing electric field. A further consequence is the existence of self-sustaining electromagnetic waves which travel through empty space.
The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics.
== Formulation in terms of electric and magnetic fields (microscopic or in vacuum version) ==
In the electric and magnetic field formulation there are four equations that determine the fields for given charge and current distribution. A separate law of nature, the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. By convention, a version of this law in the original equations by Maxwell is no longer included. The vector calculus formalism below, the work of Oliver Heaviside, has become standard. It is rotationally invariant, and therefore mathematically more transparent than Maxwell's original 20 equations in x, y and z components. The relativistic formulations are more symmetric and Lorentz invariant. For the same equations expressed using tensor calculus or differential forms, see § Alternative formulations.
The differential and integral formulations are mathematically equivalent; both are useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis.
=== Key to the notation ===
Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated.
The equations introduce the electric field, E, a vector field, and the magnetic field, B, a pseudovector field, each generally having a time and location dependence.
The sources are
the total electric charge density (total charge per unit volume), ρ, and
the total electric current density (total current per unit area), J.
The universal constants appearing in the equations (the first two explicitly only in the SI formulation) are:
the permittivity of free space, ε0, and
the permeability of free space, μ0, and
the speed of light,
{\displaystyle c=({\varepsilon _{0}\mu _{0}})^{-1/2}}
==== Differential equations ====
In the differential equations,
the nabla symbol, ∇, denotes the three-dimensional gradient operator, del,
the ∇⋅ symbol (pronounced "del dot") denotes the divergence operator,
the ∇× symbol (pronounced "del cross") denotes the curl operator.
==== Integral equations ====
In the integral equations,
Ω is any volume with closed boundary surface ∂Ω, and
Σ is any surface with closed boundary curve ∂Σ,
The equations are a little easier to interpret with time-independent surfaces and volumes. Time-independent surfaces and volumes are "fixed" and do not change over a given time interval. For example, since the surface is time-independent, we can bring the differentiation under the integral sign in Faraday's law:
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\iint _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }{\frac {\partial \mathbf {B} }{\partial t}}\cdot \mathrm {d} \mathbf {S} \,,}
Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by using the differential version and using Gauss' and Stokes' theorems as appropriate.
{\displaystyle {\vphantom {\int }}_{\scriptstyle \partial \Omega }} is a surface integral over the boundary surface ∂Ω, with the loop indicating the surface is closed,
{\displaystyle \iiint _{\Omega }} is a volume integral over the volume Ω,
{\displaystyle \oint _{\partial \Sigma }} is a line integral around the boundary curve ∂Σ, with the loop indicating the curve is closed,
{\displaystyle \iint _{\Sigma }} is a surface integral over the surface Σ.
The total electric charge Q enclosed in Ω is the volume integral over Ω of the charge density ρ (see the "macroscopic formulation" section below):
{\displaystyle Q=\iiint _{\Omega }\rho \ \mathrm {d} V,}
where dV is the volume element.
The net magnetic flux ΦB is the surface integral of the magnetic field B passing through a fixed surface, Σ:
{\displaystyle \Phi _{B}=\iint _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {S} ,}
The net electric flux ΦE is the surface integral of the electric field E passing through Σ:
{\displaystyle \Phi _{E}=\iint _{\Sigma }\mathbf {E} \cdot \mathrm {d} \mathbf {S} ,}
The net electric current I is the surface integral of the electric current density J passing through Σ:
{\displaystyle I=\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} ,}
where dS denotes the differential vector element of surface area S, normal to surface Σ. (Vector area is sometimes denoted by A rather than S, but this conflicts with the notation for magnetic vector potential).
=== Formulation in the SI ===
=== Formulation in the Gaussian system ===
The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of ε0 and μ0 into the units (and thus redefining these). With a corresponding change in the values of the quantities for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor: the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension. Such modified definitions are conventionally used with the Gaussian (CGS) units. Using these definitions, colloquially "in Gaussian units",
the Maxwell equations become:
The equations simplify slightly when a system of quantities is chosen in which the speed of light, c, is used for nondimensionalization, so that, for example, seconds and light-seconds are interchangeable, and c = 1.
Further changes are possible by absorbing factors of 4π. This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units, used mainly in particle physics).
== Relationship between differential and integral formulations ==
The equivalence of the differential and integral formulations is a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem.
=== Flux and divergence ===
According to the (purely mathematical) Gauss divergence theorem, the electric flux through the
boundary surface ∂Ω can be rewritten as
{\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {E} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {E} \,\mathrm {d} V}
The integral version of Gauss's equation can thus be rewritten as
{\displaystyle \iiint _{\Omega }\left(\nabla \cdot \mathbf {E} -{\frac {\rho }{\varepsilon _{0}}}\right)\,\mathrm {d} V=0}
Since Ω is arbitrary (e.g. an arbitrary small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere. This is
the differential equations formulation of Gauss equation up to a trivial rearrangement.
Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives
{\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {B} \,\mathrm {d} V=0.}
which is satisfied for all Ω if and only if
{\displaystyle \nabla \cdot \mathbf {B} =0} everywhere.
=== Circulation and curl ===
By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve ∂Σ to an integral of the "circulation of the fields" (i.e. their curls) over a surface it bounds, i.e.
{\displaystyle \oint _{\partial \Sigma }\mathbf {B} \cdot \mathrm {d} {\boldsymbol {\ell }}=\iint _{\Sigma }(\nabla \times \mathbf {B} )\cdot \mathrm {d} \mathbf {S} ,}
Hence the Ampère–Maxwell law, the modified version of Ampère's circuital law, in integral form can be rewritten as
{\displaystyle \iint _{\Sigma }\left(\nabla \times \mathbf {B} -\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)\cdot \mathrm {d} \mathbf {S} =0.}
Since Σ can be chosen arbitrarily, e.g. as an arbitrary small, arbitrary oriented, and arbitrary centered disk, we conclude that the integrand is zero if and only if the Ampère–Maxwell law in differential equations form is satisfied.
The equivalence of Faraday's law in differential and integral form follows likewise.
The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field.
== Charge conservation ==
The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the Ampère–Maxwell law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives:
{\displaystyle 0=\nabla \cdot (\nabla \times \mathbf {B} )=\nabla \cdot \left(\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +\varepsilon _{0}{\frac {\partial }{\partial t}}\nabla \cdot \mathbf {E} \right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +{\frac {\partial \rho }{\partial t}}\right)}
i.e.,
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0.}
By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary:
{\displaystyle {\frac {d}{dt}}Q_{\Omega }={\frac {d}{dt}}\iiint _{\Omega }\rho \,\mathrm {d} V=-{\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {J} \cdot \mathrm {d} \mathbf {S} =-I_{\partial \Omega }.}
In particular, in an isolated system the total charge is conserved.
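The key identity in this derivation, ∇ · (∇ × B) = 0, holds for any smooth vector field. It can be checked symbolically with SymPy; the polynomial/trigonometric field below is an arbitrary illustrative choice:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# An arbitrarily chosen smooth field standing in for B; the identity
# div(curl B) = 0 holds for any such field because mixed partials commute.
B = (x**2 * y) * N.i + (sp.sin(y) * z) * N.j + (x * z**3) * N.k

result = sp.simplify(divergence(curl(B)))
print(result)   # 0
```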
== Vacuum equations, electromagnetic waves and speed of light ==
In a region with no charges (ρ = 0) and no currents (J = 0), such as in vacuum, Maxwell's equations reduce to:
{\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} &=0,&\nabla \times \mathbf {E} +{\frac {\partial \mathbf {B} }{\partial t}}=0,\\\nabla \cdot \mathbf {B} &=0,&\nabla \times \mathbf {B} -\mu _{0}\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}=0.\end{aligned}}}
Taking the curl (∇×) of the curl equations, and using the curl of the curl identity we obtain
{\displaystyle {\begin{aligned}\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\\\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.\end{aligned}}}
The quantity {\displaystyle \mu _{0}\varepsilon _{0}} has the dimension (T/L)². Defining {\displaystyle c=(\mu _{0}\varepsilon _{0})^{-1/2}}, the equations above have the form of the standard wave equations
{\displaystyle {\begin{aligned}{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\\{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.\end{aligned}}}
Already during Maxwell's lifetime, it was found that the known values for {\displaystyle \varepsilon _{0}} and {\displaystyle \mu _{0}} give {\displaystyle c\approx 2.998\times 10^{8}~{\text{m/s}}}, then already known to be the speed of light in free space. This led him to propose that light and radio waves were propagating electromagnetic waves, since amply confirmed. In the old SI system of units, the values of {\displaystyle \mu _{0}=4\pi \times 10^{-7}} and {\displaystyle c=299\,792\,458~{\text{m/s}}} are defined constants (which means that by definition {\displaystyle \varepsilon _{0}=8.854\,187\,8...\times 10^{-12}~{\text{F/m}}}) that define the ampere and the metre. In the new SI system, only c keeps its defined value, and the electron charge gets a defined value.
In materials with relative permittivity, εr, and relative permeability, μr, the phase velocity of light becomes
{\displaystyle v_{\text{p}}={\frac {1}{\sqrt {\mu _{0}\mu _{\text{r}}\varepsilon _{0}\varepsilon _{\text{r}}}}},}
which is usually less than c.
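A quick numerical sketch of both relations (the relative permittivity below is an assumed, glass-like value; μr = 1 is assumed):

```python
import math

# Vacuum constants (SI): mu_0 as defined in the old SI, eps_0 the measured value.
mu_0 = 4 * math.pi * 1e-7        # H/m
eps_0 = 8.8541878128e-12         # F/m

c = 1 / math.sqrt(mu_0 * eps_0)
print(f"c   = {c:.6e} m/s")      # ~2.998e8 m/s

# Phase velocity in a linear medium: v_p = 1/sqrt(mu_0 mu_r eps_0 eps_r).
# eps_r = 2.25 is an illustrative glass-like value; mu_r = 1 assumed.
eps_r, mu_r = 2.25, 1.0
v_p = 1 / math.sqrt(mu_0 * mu_r * eps_0 * eps_r)
print(f"v_p = {v_p:.6e} m/s")    # c / 1.5 for this choice
```

With εr = 2.25 and μr = 1 the wave slows to c/1.5, i.e. a refractive index of 1.5.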
In addition, E and B are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's modification of Ampère's circuital law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at velocity c.
== Macroscopic formulation ==
The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping.
The microscopic version is sometimes called "Maxwell's equations in vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents.: 5
"Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself.
In the macroscopic equations, the influence of bound charge Qb and bound current Ib is incorporated into the displacement field D and the magnetizing field H, while the equations depend only on the free charges Qf and free currents If. This reflects a splitting of the total electric charge Q and current I (and their densities ρ and J) into free and bound parts:
{\displaystyle {\begin{aligned}Q&=Q_{\text{f}}+Q_{\text{b}}=\iiint _{\Omega }\left(\rho _{\text{f}}+\rho _{\text{b}}\right)\,\mathrm {d} V=\iiint _{\Omega }\rho \,\mathrm {d} V,\\I&=I_{\text{f}}+I_{\text{b}}=\iint _{\Sigma }\left(\mathbf {J} _{\text{f}}+\mathbf {J} _{\text{b}}\right)\cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} .\end{aligned}}}
The cost of this splitting is that the additional fields D and H need to be determined through phenomenological constitutive equations relating these fields to the electric field E and the magnetic field B, together with the bound charge and current.
See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum; and the macroscopic equations, dealing with free charge and current, practical to use within materials.
=== Bound charge and current ===
When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds the same, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization P of the material, its dipole moment per unit volume. If P is uniform, a macroscopic separation of charge is produced only at the surfaces where P enters and leaves the material. For non-uniform P, a charge is also produced in the bulk.
Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization M.
The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of P and M, which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume.
=== Auxiliary fields, polarization and magnetization ===
The definitions of the auxiliary fields are:
{\displaystyle {\begin{aligned}\mathbf {D} (\mathbf {r} ,t)&=\varepsilon _{0}\mathbf {E} (\mathbf {r} ,t)+\mathbf {P} (\mathbf {r} ,t),\\\mathbf {H} (\mathbf {r} ,t)&={\frac {1}{\mu _{0}}}\mathbf {B} (\mathbf {r} ,t)-\mathbf {M} (\mathbf {r} ,t),\end{aligned}}}
where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density ρb and bound current density Jb in terms of polarization P and magnetization M are then defined as
{\displaystyle {\begin{aligned}\rho _{\text{b}}&=-\nabla \cdot \mathbf {P} ,\\\mathbf {J} _{\text{b}}&=\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}.\end{aligned}}}
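The first of these definitions can be illustrated in one dimension with an assumed slab-like polarization profile: where P is uniform the bound charge density vanishes, charge appears only where P changes, and the slab as a whole remains neutral.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]

# Assumed polarization profile: uniform in the bulk of a slab,
# smoothly dropping to zero near its faces at x = ±0.8.
P = 0.5 * (np.tanh(20 * (x + 0.8)) - np.tanh(20 * (x - 0.8)))

rho_b = -np.gradient(P, x)              # bound charge density: rho_b = -dP/dx

bulk_charge = abs(rho_b[len(x) // 2])   # ~0 where P is uniform
total_charge = rho_b.sum() * dx         # ~0: the slab stays neutral overall
print(bulk_charge, total_charge)
```

The bound charge concentrates in two opposite-signed layers at the slab faces, the 1D analogue of the surface charge described above.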
If we define the total, bound, and free charge and current density by
{\displaystyle {\begin{aligned}\rho &=\rho _{\text{b}}+\rho _{\text{f}},\\\mathbf {J} &=\mathbf {J} _{\text{b}}+\mathbf {J} _{\text{f}},\end{aligned}}}
and use the defining relations above to eliminate D, and H, the "macroscopic" Maxwell's equations reproduce the "microscopic" equations.
=== Constitutive relations ===
In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between displacement field D and the electric field E, as well as the magnetizing field H and the magnetic field B. Equivalently, we have to specify the dependence of the polarization P (hence the bound charge) and the magnetization M (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations. For real-world materials, the constitutive relations are rarely simple, except approximately, and usually determined by experiment. See the main article on constitutive relations for a fuller description.: 44–45
For materials without polarization and magnetization, the constitutive relations are (by definition): 2
{\displaystyle \mathbf {D} =\varepsilon _{0}\mathbf {E} ,\quad \mathbf {H} ={\frac {1}{\mu _{0}}}\mathbf {B} ,}
where ε0 is the permittivity of free space and μ0 the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal.
An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarization and magnetization.
More generally, for linear materials the constitutive relations are: 44–45
{\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,\quad \mathbf {H} ={\frac {1}{\mu }}\mathbf {B} ,}
where ε is the permittivity and μ the permeability of the material. For the displacement field D the linear approximation is usually excellent, because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high power pulsed lasers), the interatomic electric fields of materials, of the order of 10¹¹ V/m, are much higher than the external field. For the magnetizing field {\displaystyle \mathbf {H} }, however, the linear approximation can break down in common materials like iron, leading to phenomena like hysteresis. Even the linear case can have various complications, however.
For homogeneous materials, ε and μ are constant throughout the material, while for inhomogeneous materials they depend on location within the material (and perhaps time).: 463
For isotropic materials, ε and μ are scalars, while for anisotropic materials (e.g. due to crystal structure) they are tensors.: 421 : 463
Materials are generally dispersive, so ε and μ depend on the frequency of any incident EM waves.: 625 : 397
Even more generally, in the case of non-linear materials (see for example nonlinear optics), D and P are not necessarily proportional to E, similarly H or M is not necessarily proportional to B. In general D and H depend on both E and B, on location and time, and possibly other physical quantities.
In applications one also has to describe how the free currents and charge density behave in terms of E and B possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. E.g., the original equations given by Maxwell (see History of Maxwell's equations) included Ohm's law in the form
{\displaystyle \mathbf {J} _{\text{f}}=\sigma \mathbf {E} .}
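As a numeric sketch of this field form of Ohm's law (the conductivity and field values are illustrative; σ ≈ 5.96×10⁷ S/m is a commonly quoted figure for copper):

```python
# Ohm's law in field form: J_f = sigma * E.
sigma_cu = 5.96e7      # S/m, commonly quoted conductivity of copper (assumed here)
E = 0.01               # V/m, a modest field inside a wire (assumed)

J = sigma_cu * E       # free current density, A/m^2
print(f"J = {J:.3e} A/m^2")

# Current through an assumed 1 mm^2 cross-section:
A = 1e-6               # m^2
print(f"I = {J * A:.3f} A")
```

Even a hundredth of a volt per metre drives a sizeable current density in a good conductor, which is why fields inside metals are ordinarily tiny.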
== Alternative formulations ==
Following are some of the several other mathematical formalisms of Maxwell's equations, with the columns separating the two homogeneous Maxwell equations from the two inhomogeneous ones. Each formulation has versions directly in terms of the electric and magnetic fields, and indirectly in terms of the electrical potential φ and the vector potential A. Potentials were introduced as a convenient way to solve the homogeneous equations, but it was thought that all observable physics was contained in the electric and magnetic fields (or relativistically, the Faraday tensor). The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the electric and magnetic fields vanish (Aharonov–Bohm effect).
Each table describes one formalism. See the main article for details of each formulation.
The direct spacetime formulations make manifest that the Maxwell equations are relativistically invariant, where space and time are treated on equal footing. Because of this symmetry, the electric and magnetic fields are treated on equal footing and are recognized as components of the Faraday tensor. This reduces the four Maxwell equations to two, which simplifies the equations, although we can no longer use the familiar vector formulation. Maxwell equations in formulations that do not treat space and time manifestly on the same footing have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables. For this reason the relativistic invariant equations are usually called the Maxwell equations as well.
Each table below describes one formalism.
In the tensor calculus formulation, the electromagnetic tensor Fαβ is an antisymmetric covariant order 2 tensor; the four-potential, Aα, is a covariant vector; the current, Jα, is a vector; the square brackets, [ ], denote antisymmetrization of indices; ∂α is the partial derivative with respect to the coordinate, xα. In Minkowski space coordinates are chosen with respect to an inertial frame; (xα) = (ct, x, y, z), so that the metric tensor used to raise and lower indices is ηαβ = diag(1, −1, −1, −1). The d'Alembert operator on Minkowski space is ◻ = ∂α∂α as in the vector formulation. In general spacetimes, the coordinate system xα is arbitrary, the covariant derivative ∇α, the Ricci tensor, Rαβ and raising and lowering of indices are defined by the Lorentzian metric, gαβ and the d'Alembert operator is defined as ◻ = ∇α∇α. The topological restriction is that the second real cohomology group of the space vanishes (see the differential form formulation for an explanation). This is violated for Minkowski space with a line removed, which can model a (flat) spacetime with a point-like monopole on the complement of the line.
In the differential form formulation on arbitrary spacetimes, F = 1/2 Fαβ dxα ∧ dxβ is the electromagnetic tensor considered as a 2-form, A = Aα dxα is the potential 1-form, {\displaystyle J=-J_{\alpha }{\star }\mathrm {d} x^{\alpha }} is the current 3-form, d is the exterior derivative, and {\displaystyle {\star }} is the Hodge star on forms, defined (up to its orientation, i.e. its sign) by the Lorentzian metric of spacetime. In the special case of 2-forms such as F, the Hodge star {\displaystyle {\star }} depends on the metric tensor only for its local scale. This means that, as formulated, the differential form field equations are conformally invariant, but the Lorenz gauge condition breaks conformal invariance. The operator
{\displaystyle \Box =(-{\star }\mathrm {d} {\star }\mathrm {d} -\mathrm {d} {\star }\mathrm {d} {\star })}
is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian spacetime. The topological condition is again that the second real cohomology group is 'trivial' (meaning that its form follows from a definition). By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact.
Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. Historically, a quaternionic formulation was used.
== Solutions ==
Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations. These all form a set of coupled partial differential equations which are often very difficult to solve: the solutions encompass all the diverse phenomena of classical electromagnetism. Some general remarks follow.
As for any differential equation, boundary conditions and initial conditions are necessary for a unique solution. For example, even with no charges and no currents anywhere in spacetime, there are the obvious solutions for which E and B are zero or constant, but there are also non-trivial solutions corresponding to electromagnetic waves. In some cases, Maxwell's equations are solved over the whole of space, and boundary conditions are given as asymptotic limits at infinity. In other cases, Maxwell's equations are solved in a finite region of space, with appropriate conditions on the boundary of that region, for example an artificial absorbing boundary representing the rest of the universe, or periodic boundary conditions, or walls that isolate a small region from the outside world (as with a waveguide or cavity resonator).
Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. They assume specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. However, Jefimenko's equations are unhelpful in situations when the charges and currents are themselves affected by the fields they create.
Numerical methods for differential equations can be used to compute approximate solutions of Maxwell's equations when exact solutions are impossible. These include the finite element method and finite-difference time-domain method. For more details, see Computational electromagnetics.
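A minimal one-dimensional finite-difference time-domain (Yee) update can illustrate the leapfrog structure of such solvers. The grid size, source profile, and normalized units below (c = 1, unit cell, "magic" time step) are illustrative choices, not taken from the text; this is a toy sketch, not a production solver.

```python
import numpy as np

# Minimal 1D FDTD (Yee scheme) in normalized units: c = 1, dx = 1, dt = 1
# (the "magic time step", for which the 1D scheme propagates pulses exactly).
n, steps = 400, 150
ez = np.zeros(n)   # electric field samples
hy = np.zeros(n)   # magnetic field, staggered half a cell to the right

for t in range(steps):
    hy[:-1] += ez[1:] - ez[:-1]                # update from Faraday's law
    ez[1:] += hy[1:] - hy[:-1]                 # update from the Ampere-Maxwell law
    ez[200] += np.exp(-((t - 30) / 10) ** 2)   # soft Gaussian source at mid-grid

# The injected pulse splits and travels one cell per step in each direction,
# so after 150 steps its peaks sit far from the source cell at index 200.
peak = int(np.argmax(np.abs(ez)))
print(peak, float(np.max(np.abs(ez))))
```

The staggering of `ez` and `hy` in space and time is exactly the "perpetual cycle" described earlier: each field's update is driven by the spatial variation of the other.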
== Overdetermination of Maxwell's equations ==
Maxwell's equations seem overdetermined, in that they involve six unknowns (the three components of E and B) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampère's circuital laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation.) This is related to a certain limited kind of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law and Ampère's circuital law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does, and assuming conservation of charge and the nonexistence of magnetic monopoles.
This explanation was first introduced by Julius Adams Stratton in 1941.
Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations become not overdetermined after all. The resulting formulation can lead to more accurate algorithms that take all four laws into account.
Both identities {\displaystyle \nabla \cdot \nabla \times \mathbf {B} \equiv 0,\ \nabla \cdot \nabla \times \mathbf {E} \equiv 0}, which reduce eight equations to six independent ones, are the true reason for the overdetermination.
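These identities can even be checked on a grid: discrete central-difference operators along different axes commute, so the divergence of a numerically computed curl vanishes to rounding error. A small sketch, with field components chosen arbitrarily:

```python
import numpy as np

# Check div(curl F) = 0 for an arbitrary smooth field on a 3D grid.
# Difference operators along different axes commute, so the discrete
# divergence of the discrete curl is zero up to floating-point rounding.
n = 24
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

Fx, Fy, Fz = np.sin(Y * Z), X * Z**2, np.exp(X) * Y   # arbitrary smooth field

def d(f, axis):
    """Central-difference partial derivative along the given axis."""
    return np.gradient(f, x, axis=axis)

curl_x = d(Fz, 1) - d(Fy, 2)
curl_y = d(Fx, 2) - d(Fz, 0)
curl_z = d(Fy, 0) - d(Fx, 1)

div_curl = d(curl_x, 0) + d(curl_y, 1) + d(curl_z, 2)
print(np.max(np.abs(div_curl)))   # at machine-precision level
```

The residual is set by floating-point cancellation, not by discretization: the mixed second differences cancel pairwise exactly, mirroring the equality of mixed partials in the continuum identity.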
Equivalently, the overdetermination can be viewed as implying conservation of electric and magnetic charge, as they are required in the derivation described above but implied by the two Gauss's laws.
For linear algebraic equations, one can make 'nice' rules to rewrite the equations and unknowns. The equations can be linearly dependent. But in differential equations, and especially partial differential equations (PDEs), one needs appropriate boundary conditions, which depend in not so obvious ways on the equations. Even more, if one rewrites them in terms of vector and scalar potential, then the equations are underdetermined because of gauge fixing.
== Maxwell's equations as the classical limit of QED ==
Maxwell's equations and the Lorentz force law (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena. However, they do not account for quantum effects, and so their domain of applicability is limited. Maxwell's equations are thought of as the classical limit of quantum electrodynamics (QED).
Some observed electromagnetic phenomena cannot be explained with Maxwell's equations if the source of the electromagnetic fields are the classical distributions of charge and current. These include photon–photon scattering and many other phenomena related to photons or virtual photons, "nonclassical light" and quantum entanglement of electromagnetic fields (see Quantum optics). E.g. quantum cryptography cannot be described by Maxwell theory, not even approximately. The approximate nature of Maxwell's equations becomes more and more apparent when going into the extremely strong field regime (see Euler–Heisenberg Lagrangian) or to extremely small distances.
Finally, Maxwell's equations cannot explain any phenomenon involving individual photons interacting with quantum matter, such as the photoelectric effect, Planck's law, the Duane–Hunt law, and single-photon light detectors. However, many such phenomena may be explained using a halfway theory of quantum matter coupled to a classical electromagnetic field, either as external field or with the expected value of the charge current and density on the right hand side of Maxwell's equations. This is known as semiclassical theory or self-field QED and was initially discovered by de Broglie and Schrödinger and later fully developed by E.T. Jaynes and A.O. Barut.
== Variations ==
Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well.
=== Magnetic monopoles ===
Maxwell's equations posit that there is electric charge, but no magnetic charge (also called magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed, despite extensive searches, and may not exist. If magnetic monopoles did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields.: 273–275
== See also ==
== Explanatory notes ==
== References ==
== Further reading ==
Imaeda, K. (1995), "Biquaternionic Formulation of Maxwell's Equations and their Solutions", in Ablamowicz, Rafał; Lounesto, Pertti (eds.), Clifford Algebras and Spinor Structures, Springer, pp. 265–280, doi:10.1007/978-94-015-8422-7_16, ISBN 978-90-481-4525-6
=== Historical publications ===
On Faraday's Lines of Force – 1855/56. Maxwell's first paper (Part 1 & 2) – Compiled by Blaze Labs Research (PDF).
On Physical Lines of Force – 1861. Maxwell's 1861 paper describing magnetic lines of force – Predecessor to 1873 Treatise.
James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459–512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.)
A Dynamical Theory Of The Electromagnetic Field – 1865. Maxwell's 1865 paper describing his 20 equations, link from Google Books.
J. Clerk Maxwell (1873), "A Treatise on Electricity and Magnetism":
Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 1 – 1873 – Posner Memorial Collection – Carnegie Mellon University.
Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 2 – 1873 – Posner Memorial Collection – Carnegie Mellon University.
Developments before the theory of relativity
Larmor Joseph (1897). "On a dynamical theory of the electric and luminiferous medium. Part 3, Relations with material media" . Phil. Trans. R. Soc. 190: 205–300.
Lorentz Hendrik (1899). "Simplified theory of electrical and optical phenomena in moving systems" . Proc. Acad. Science Amsterdam. I: 427–443.
Lorentz Hendrik (1904). "Electromagnetic phenomena in a system moving with any velocity less than that of light" . Proc. Acad. Science Amsterdam. IV: 669–678.
Henri Poincaré (1900) "La théorie de Lorentz et le Principe de Réaction" (in French), Archives Néerlandaises, V, 253–278.
Henri Poincaré (1902) "La Science et l'Hypothèse" (in French).
Henri Poincaré (1905) "Sur la dynamique de l'électron" (in French), Comptes Rendus de l'Académie des Sciences, 140, 1504–1508.
Catt, Walton and Davidson. "The History of Displacement Current" Archived 2008-05-06 at the Wayback Machine. Wireless World, March 1979.
== External links ==
"Maxwell equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
maxwells-equations.com — An intuitive tutorial of Maxwell's equations.
The Feynman Lectures on Physics Vol. II Ch. 18: The Maxwell Equations
Wikiversity Page on Maxwell's Equations
=== Modern treatments ===
Electromagnetism (ch. 11), B. Crowell, Fullerton College
Lecture series: Relativity and electromagnetism, R. Fitzpatrick, University of Texas at Austin
Electromagnetic waves from Maxwell's equations on Project PHYSNET.
MIT Video Lecture Series (36 × 50 minute lectures) (in .mp4 format) – Electricity and Magnetism Taught by Professor Walter Lewin.
=== Other ===
Silagadze, Z. K. (2002). "Feynman's derivation of Maxwell equations and extra dimensions". Annales de la Fondation Louis de Broglie. 27: 241–256. arXiv:hep-ph/0106235.
Nature Milestones: Photons – Milestone 2 (1861) Maxwell's equations
In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics.
The Hamilton–Jacobi equation is a formulation of mechanics in which the motion of a particle can be represented as a wave. In this sense, it fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the eighteenth century) of finding an analogy between the propagation of light and the motion of a particle. The wave equation followed by mechanical systems is similar to, but not identical with, the Schrödinger equation, as described below; for this reason, the Hamilton–Jacobi equation is considered the "closest approach" of classical mechanics to quantum mechanics. The qualitative form of this connection is called Hamilton's optico-mechanical analogy.
In mathematics, the Hamilton–Jacobi equation is a necessary condition describing extremal geometry in generalizations of problems from the calculus of variations. It can be understood as a special case of the Hamilton–Jacobi–Bellman equation from dynamic programming.
== Overview ==
The Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for a system of particles at coordinates {\displaystyle \mathbf {q} }. The function {\displaystyle H} is the system's Hamiltonian giving the system's energy. The solution of this equation is the action, {\displaystyle S}, called Hamilton's principal function.: 291
The solution can be related to the system Lagrangian {\displaystyle {\mathcal {L}}} by an indefinite integral of the form used in the principle of least action:: 431
{\displaystyle \ S=\int {\mathcal {L}}\ \mathrm {d} t+~{\mathsf {some\ constant}}~}
Geometrical surfaces of constant action are perpendicular to system trajectories, creating a wavefront-like view of the system dynamics. This property of the Hamilton–Jacobi equation connects classical mechanics to quantum mechanics.: 175
== Mathematical formulation ==
=== Notation ===
Boldface variables such as
q
{\displaystyle \mathbf {q} }
represent a list of
N
{\displaystyle N}
generalized coordinates,
q
=
(
q
1
,
q
2
,
…
,
q
N
−
1
,
q
N
)
{\displaystyle \mathbf {q} =(q_{1},q_{2},\ldots ,q_{N-1},q_{N})}
A dot over a variable or list signifies the time derivative (see Newton's notation). For example,
{\displaystyle {\dot {\mathbf {q} }}={\frac {d\mathbf {q} }{dt}}.}
The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, such as
{\displaystyle \mathbf {p} \cdot \mathbf {q} =\sum _{k=1}^{N}p_{k}q_{k}.}
=== The action functional (a.k.a. Hamilton's principal function) ===
==== Definition ====
Let the Hessian matrix {\textstyle H_{\mathcal {L}}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\left\{\partial ^{2}{\mathcal {L}}/\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}\right\}_{ij}} be invertible. The relation
{\displaystyle {\frac {d}{dt}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}=\sum _{j=1}^{n}\left({\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}}}{\ddot {q}}^{j}+{\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial {q}^{j}}}{\dot {q}}^{j}\right)+{\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial t}},\qquad i=1,\ldots ,n,}
shows that the Euler–Lagrange equations form an {\displaystyle n\times n} system of second-order ordinary differential equations. Inverting the matrix {\displaystyle H_{\mathcal {L}}} transforms this system into
{\displaystyle {\ddot {q}}^{i}=F_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t),\ i=1,\ldots ,n.}
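Once brought to this explicit form, the system can be integrated like any system of ODEs. As a sketch (the example and step size are assumed, not from the text), the one-dimensional harmonic oscillator with Lagrangian L = q̇²/2 − q²/2 has Euler–Lagrange equation q̈ = F(q) = −q, integrated here with a velocity-Verlet step:

```python
import math

# Harmonic oscillator: L = q̇²/2 − q²/2, so the Euler–Lagrange equation,
# solved for the acceleration, reads q̈ = F(q) = −q.
F = lambda q: -q

q, v = 1.0, 0.0          # q(0) = 1, q̇(0) = 0 → exact solution q(t) = cos t
dt = 1e-3
steps = int(round(2 * math.pi / dt))   # integrate over one full period

a = F(q)
for _ in range(steps):                 # velocity-Verlet (leapfrog) update
    q += v * dt + 0.5 * a * dt**2
    a_new = F(q)
    v += 0.5 * (a + a_new) * dt
    a = a_new

print(q, v)   # close to (1, 0) after one period
```

After one period the numerical trajectory returns to its starting point up to the O(dt²) error of the integrator.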
Let a time instant {\displaystyle t_{0}} and a point {\displaystyle \mathbf {q} _{0}\in M} in the configuration space be fixed. The existence and uniqueness theorems guarantee that, for every {\displaystyle \mathbf {v} _{0},} the initial value problem with the conditions {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0}} and {\displaystyle {\dot {\gamma }}|_{\tau =t_{0}}=\mathbf {v} _{0}} has a locally unique solution {\displaystyle \gamma =\gamma (\tau ;t_{0},\mathbf {q} _{0},\mathbf {v} _{0}).}
Additionally, let there be a sufficiently small time interval {\displaystyle (t_{0},t_{1})} such that extremals with different initial velocities {\displaystyle \mathbf {v} _{0}} would not intersect in {\displaystyle M\times (t_{0},t_{1}).} The latter means that, for any {\displaystyle \mathbf {q} \in M} and any {\displaystyle t\in (t_{0},t_{1}),} there can be at most one extremal {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})} for which {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0}} and {\displaystyle \gamma |_{\tau =t}=\mathbf {q} .}
Substituting {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})} into the action functional results in Hamilton's principal function (HPF)
{\displaystyle S(\mathbf {q} ,t;\mathbf {q} _{0},t_{0})=\int _{t_{0}}^{t}{\mathcal {L}}\left(\gamma (\tau ),{\dot {\gamma }}(\tau ),\tau \right)\,\mathrm {d} \tau ,}
where {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0}),} {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0},} and {\displaystyle \gamma |_{\tau =t}=\mathbf {q} .}
=== Formula for the momenta ===
The momenta are defined as the quantities {\textstyle p_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\partial {\mathcal {L}}/\partial {\dot {q}}^{i}.} This section shows that the dependency of {\displaystyle p_{i}} on {\displaystyle \mathbf {\dot {q}} } disappears, once the HPF is known.
Indeed, let a time instant $t_0$ and a point $\mathbf{q}_0$ in the configuration space be fixed. For every time instant $t$ and a point $\mathbf{q},$ let $\gamma = \gamma(\tau; t, t_0, \mathbf{q}, \mathbf{q}_0)$ be the (unique) extremal from the definition of the Hamilton's principal function $S$. Call $\mathbf{v} \,\stackrel{\text{def}}{=}\, \dot{\gamma}(\tau; t, t_0, \mathbf{q}, \mathbf{q}_0)|_{\tau = t}$ the velocity at $\tau = t$. Then

$$\mathbf{p}(\mathbf{q}, t) = \frac{\partial S}{\partial \mathbf{q}}(\mathbf{q}, t).$$
=== Formula ===
Given the Hamiltonian $H(\mathbf{q}, \mathbf{p}, t)$ of a mechanical system, the Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for the Hamilton's principal function $S$,

$$\frac{\partial S}{\partial t} + H\!\left(\mathbf{q}, \frac{\partial S}{\partial \mathbf{q}}, t\right) = 0.$$
Alternatively, as described below, the Hamilton–Jacobi equation may be derived from Hamiltonian mechanics by treating $S$ as the generating function for a canonical transformation of the classical Hamiltonian $H = H(q_1, q_2, \ldots, q_N; p_1, p_2, \ldots, p_N; t).$
The conjugate momenta correspond to the first derivatives of $S$ with respect to the generalized coordinates,

$$p_k = \frac{\partial S}{\partial q_k}.$$
As a solution to the Hamilton–Jacobi equation, the principal function contains $N + 1$ undetermined constants, the first $N$ of them denoted as $\alpha_1, \alpha_2, \ldots, \alpha_N$, and the last one coming from the integration of $\frac{\partial S}{\partial t}$.
The relationship between $\mathbf{p}$ and $\mathbf{q}$ then describes the orbit in phase space in terms of these constants of motion. Furthermore, the quantities

$$\beta_k = \frac{\partial S}{\partial \alpha_k}, \quad k = 1, 2, \ldots, N$$

are also constants of motion, and these equations can be inverted to find $\mathbf{q}$ as a function of all the $\alpha$ and $\beta$ constants and time.
== Comparison with other formulations of mechanics ==
The Hamilton–Jacobi equation is a single, first-order partial differential equation for the function $S$ of the $N$ generalized coordinates $q_1, q_2, \ldots, q_N$ and the time $t$. The generalized momenta do not appear, except as derivatives of $S$, the classical action.
For comparison, in the equivalent Euler–Lagrange equations of motion of Lagrangian mechanics, the conjugate momenta also do not appear; however, those equations are a system of $N$, generally second-order, equations for the time evolution of the generalized coordinates. Similarly, Hamilton's equations of motion are another system of $2N$ first-order equations for the time evolution of the generalized coordinates and their conjugate momenta $p_1, p_2, \ldots, p_N$.
Since the HJE is an equivalent expression of an integral minimization problem such as Hamilton's principle, the HJE can be useful in other problems of the calculus of variations and, more generally, in other branches of mathematics and physics, such as dynamical systems, symplectic geometry and quantum chaos. For example, the Hamilton–Jacobi equations can be used to determine the geodesics on a Riemannian manifold, an important variational problem in Riemannian geometry. However, as a computational tool, the partial differential equations are notoriously complicated to solve except when it is possible to separate the independent variables; in that case the HJE becomes computationally useful.
== Derivation using a canonical transformation ==
Any canonical transformation involving a type-2 generating function $G_2(\mathbf{q}, \mathbf{P}, t)$ leads to the relations

$$\mathbf{p} = \frac{\partial G_2}{\partial \mathbf{q}}, \quad \mathbf{Q} = \frac{\partial G_2}{\partial \mathbf{P}}, \quad K(\mathbf{Q}, \mathbf{P}, t) = H(\mathbf{q}, \mathbf{p}, t) + \frac{\partial G_2}{\partial t},$$
and Hamilton's equations in terms of the new variables $\mathbf{P}, \mathbf{Q}$ and new Hamiltonian $K$ have the same form:

$$\dot{\mathbf{P}} = -\frac{\partial K}{\partial \mathbf{Q}}, \quad \dot{\mathbf{Q}} = +\frac{\partial K}{\partial \mathbf{P}}.$$
To derive the HJE, a generating function $G_2(\mathbf{q}, \mathbf{P}, t)$ is chosen in such a way that it makes the new Hamiltonian $K = 0$. Hence, all its derivatives are also zero, and the transformed Hamilton's equations become trivial,

$$\dot{\mathbf{P}} = \dot{\mathbf{Q}} = 0,$$
so the new generalized coordinates and momenta are constants of motion. As they are constants, in this context the new generalized momenta $\mathbf{P}$ are usually denoted $\alpha_1, \alpha_2, \ldots, \alpha_N$, i.e. $P_m = \alpha_m$, and the new generalized coordinates $\mathbf{Q}$ are typically denoted as $\beta_1, \beta_2, \ldots, \beta_N$, so $Q_m = \beta_m$.
Setting the generating function equal to Hamilton's principal function, plus an arbitrary constant $A$,

$$G_2(\mathbf{q}, \boldsymbol{\alpha}, t) = S(\mathbf{q}, t) + A,$$
the HJE automatically arises:

$$\mathbf{p} = \frac{\partial G_2}{\partial \mathbf{q}} = \frac{\partial S}{\partial \mathbf{q}} \;\Longrightarrow\; H(\mathbf{q}, \mathbf{p}, t) + \frac{\partial G_2}{\partial t} = 0 \;\Longrightarrow\; H\!\left(\mathbf{q}, \frac{\partial S}{\partial \mathbf{q}}, t\right) + \frac{\partial S}{\partial t} = 0.$$
When solved for $S(\mathbf{q}, \boldsymbol{\alpha}, t)$, these also give us the useful equations

$$\mathbf{Q} = \boldsymbol{\beta} = \frac{\partial S}{\partial \boldsymbol{\alpha}},$$
or, written in components for clarity,

$$Q_m = \beta_m = \frac{\partial S(\mathbf{q}, \boldsymbol{\alpha}, t)}{\partial \alpha_m}.$$
Ideally, these $N$ equations can be inverted to find the original generalized coordinates $\mathbf{q}$ as a function of the constants $\boldsymbol{\alpha}, \boldsymbol{\beta},$ and $t$, thus solving the original problem.
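This inversion can be carried out end-to-end for the free particle $H = p^2/2m$ (an assumed example, not from the article): the complete integral $S = \alpha q - \alpha^2 t/2m$ solves the HJE with $\alpha$ the conserved momentum, and $\beta = \partial S/\partial \alpha$ inverts to the uniform-motion trajectory. A sympy sketch:

```python
import sympy as sp

q, t, alpha, beta, m = sp.symbols('q t alpha beta m', positive=True)

# Complete integral of the free-particle HJE, with alpha the conserved momentum
S = alpha*q - alpha**2*t/(2*m)

# Check H(q, dS/dq) + dS/dt = 0 for H = p^2/(2m)
p = sp.diff(S, q)
assert sp.simplify(p**2/(2*m) + sp.diff(S, t)) == 0

# beta = dS/dalpha is a constant of motion; invert it for the trajectory q(t)
traj = sp.solve(sp.Eq(beta, sp.diff(S, alpha)), q)[0]
assert sp.simplify(traj - (beta + alpha*t/m)) == 0  # uniform motion with momentum alpha
```

The constants $(\alpha, \beta)$ play the role of initial momentum and initial position, exactly as described above.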
== Separation of variables ==
When the problem allows additive separation of variables, the HJE leads directly to constants of motion. For example, the time t can be separated if the Hamiltonian does not depend on time explicitly. In that case, the time derivative
$\frac{\partial S}{\partial t}$ in the HJE must be a constant, usually denoted ($-E$), giving the separated solution

$$S = W(q_1, q_2, \ldots, q_N) - Et,$$

where the time-independent function $W(\mathbf{q})$ is sometimes called the abbreviated action or Hamilton's characteristic function, and sometimes written $S_0$ (see action principle names). The reduced Hamilton–Jacobi equation can then be written

$$H\!\left(\mathbf{q}, \frac{\partial S}{\partial \mathbf{q}}\right) = E.$$
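As an illustrative numeric check (an assumed example, not from the article): for the harmonic oscillator $U = \tfrac{1}{2} m\omega^2 q^2$ the reduced equation gives $dW/dq = \sqrt{2m(E - U(q))}$, and the phase-space loop integral of this momentum equals $2\pi E/\omega$:

```python
import numpy as np

m, w, E = 1.0, 2.0, 3.0
qmax = np.sqrt(2*E/m)/w                      # turning point, where U(qmax) = E

q = np.linspace(-qmax, qmax, 200001)
# dW/dq = sqrt(2m(E - U(q))), clipped at the turning points
p = np.sqrt(np.clip(2*m*(E - 0.5*m*w**2*q**2), 0.0, None))

# trapezoid rule; the full loop integral is twice the upper branch
J = 2*np.sum((p[1:] + p[:-1])/2)*(q[1] - q[0])
assert abs(J - 2*np.pi*E/w) < 1e-3           # action of the oscillator: 2*pi*E/w
```

The loop integral is the area of the phase-space ellipse $p^2/2m + m\omega^2 q^2/2 = E$, which is $2\pi E/\omega$, so the quadrature agrees with the closed form.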
To illustrate separability for other variables, a certain generalized coordinate $q_k$ and its derivative $\frac{\partial S}{\partial q_k}$ are assumed to appear together as a single function $\psi\!\left(q_k, \frac{\partial S}{\partial q_k}\right)$
in the Hamiltonian

$$H = H(q_1, q_2, \ldots, q_{k-1}, q_{k+1}, \ldots, q_N; p_1, p_2, \ldots, p_{k-1}, p_{k+1}, \ldots, p_N; \psi; t).$$
In that case, the function $S$ can be partitioned into two functions, one that depends only on $q_k$ and another that depends only on the remaining generalized coordinates:

$$S = S_k(q_k) + S_{\text{rem}}(q_1, \ldots, q_{k-1}, q_{k+1}, \ldots, q_N, t).$$
Substitution of these formulae into the Hamilton–Jacobi equation shows that the function $\psi$ must be a constant (denoted here as $\Gamma_k$), yielding a first-order ordinary differential equation for $S_k(q_k)$:

$$\psi\!\left(q_k, \frac{dS_k}{dq_k}\right) = \Gamma_k.$$
In fortunate cases, the function $S$ can be separated completely into $N$ functions $S_m(q_m)$:

$$S = S_1(q_1) + S_2(q_2) + \cdots + S_N(q_N) - Et.$$
In such a case, the problem devolves to $N$ ordinary differential equations.
The separability of $S$ depends both on the Hamiltonian and on the choice of generalized coordinates. For orthogonal coordinates and Hamiltonians that have no time dependence and are quadratic in the generalized momenta, $S$ will be completely separable if the potential energy is additively separable in each coordinate, where the potential energy term for each coordinate is multiplied by the coordinate-dependent factor in the corresponding momentum term of the Hamiltonian (the Staeckel conditions). For illustration, several examples in orthogonal coordinates are worked in the next sections.
=== Examples in various coordinate systems ===
==== Spherical coordinates ====
In spherical coordinates the Hamiltonian of a particle moving in a conservative potential $U$ can be written

$$H = \frac{1}{2m}\left[p_r^2 + \frac{p_\theta^2}{r^2} + \frac{p_\phi^2}{r^2 \sin^2\theta}\right] + U(r, \theta, \phi).$$
The Hamilton–Jacobi equation is completely separable in these coordinates provided that there exist functions $U_r(r), U_\theta(\theta), U_\phi(\phi)$ such that $U$ can be written in the analogous form

$$U(r, \theta, \phi) = U_r(r) + \frac{U_\theta(\theta)}{r^2} + \frac{U_\phi(\phi)}{r^2 \sin^2\theta}.$$
Substitution of the completely separated solution

$$S = S_r(r) + S_\theta(\theta) + S_\phi(\phi) - Et$$

into the HJE yields

$$\frac{1}{2m}\left(\frac{dS_r}{dr}\right)^2 + U_r(r) + \frac{1}{2mr^2}\left[\left(\frac{dS_\theta}{d\theta}\right)^2 + 2m U_\theta(\theta)\right] + \frac{1}{2mr^2\sin^2\theta}\left[\left(\frac{dS_\phi}{d\phi}\right)^2 + 2m U_\phi(\phi)\right] = E.$$
This equation may be solved by successive integrations of ordinary differential equations, beginning with the equation for $\phi$,

$$\left(\frac{dS_\phi}{d\phi}\right)^2 + 2m U_\phi(\phi) = \Gamma_\phi,$$
where $\Gamma_\phi$ is a constant of the motion that eliminates the $\phi$ dependence from the Hamilton–Jacobi equation:

$$\frac{1}{2m}\left(\frac{dS_r}{dr}\right)^2 + U_r(r) + \frac{1}{2mr^2}\left[\left(\frac{dS_\theta}{d\theta}\right)^2 + 2m U_\theta(\theta) + \frac{\Gamma_\phi}{\sin^2\theta}\right] = E.$$
The next ordinary differential equation involves the $\theta$ generalized coordinate,

$$\left(\frac{dS_\theta}{d\theta}\right)^2 + 2m U_\theta(\theta) + \frac{\Gamma_\phi}{\sin^2\theta} = \Gamma_\theta,$$
where $\Gamma_\theta$ is again a constant of the motion that eliminates the $\theta$ dependence and reduces the HJE to the final ordinary differential equation

$$\frac{1}{2m}\left(\frac{dS_r}{dr}\right)^2 + U_r(r) + \frac{\Gamma_\theta}{2mr^2} = E,$$

whose integration completes the solution for $S$.
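The three separation constants can be checked for internal consistency symbolically (a sketch; the $\theta$-equation is used here in the form $(dS_\theta/d\theta)^2 + 2mU_\theta + \Gamma_\phi/\sin^2\theta = \Gamma_\theta$): substituting the three squared derivatives back into the full spherical HJE should reproduce $E$ identically.

```python
import sympy as sp

r, th, m, E, Gph, Gth = sp.symbols('r theta m E Gamma_phi Gamma_theta', positive=True)
ph = sp.Symbol('phi')
Ur = sp.Function('U_r')(r)
Uth = sp.Function('U_theta')(th)
Uph = sp.Function('U_phi')(ph)

# Squared derivatives of the separated pieces, read off from the three ODEs
dSph2 = Gph - 2*m*Uph                          # (dS_phi/dphi)^2
dSth2 = Gth - 2*m*Uth - Gph/sp.sin(th)**2      # (dS_theta/dtheta)^2
dSr2 = 2*m*(E - Ur) - Gth/r**2                 # (dS_r/dr)^2

# Substitute into the full HJE with the separable potential
H = (dSr2 + dSth2/r**2 + dSph2/(r**2*sp.sin(th)**2))/(2*m) \
    + Ur + Uth/r**2 + Uph/(r**2*sp.sin(th)**2)
assert sp.simplify(H - E) == 0   # all potential and Gamma terms cancel
```

Every $U$- and $\Gamma$-term cancels pairwise, confirming that the successive eliminations are consistent with the original equation.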
==== Elliptic cylindrical coordinates ====
The Hamiltonian in elliptic cylindrical coordinates can be written
$$H = \frac{p_\mu^2 + p_\nu^2}{2ma^2\left(\sinh^2\mu + \sin^2\nu\right)} + \frac{p_z^2}{2m} + U(\mu, \nu, z),$$

where the foci of the ellipses are located at $\pm a$ on the $x$-axis. The Hamilton–Jacobi equation is completely separable in these coordinates provided that $U$ has an analogous form
$$U(\mu, \nu, z) = \frac{U_\mu(\mu) + U_\nu(\nu)}{\sinh^2\mu + \sin^2\nu} + U_z(z),$$
where $U_\mu(\mu)$, $U_\nu(\nu)$ and $U_z(z)$ are arbitrary functions. Substitution of the completely separated solution

$$S = S_\mu(\mu) + S_\nu(\nu) + S_z(z) - Et$$
into the HJE yields

$$\frac{1}{2m}\left(\frac{dS_z}{dz}\right)^2 + \frac{1}{2ma^2\left(\sinh^2\mu + \sin^2\nu\right)}\left[\left(\frac{dS_\mu}{d\mu}\right)^2 + \left(\frac{dS_\nu}{d\nu}\right)^2\right] + U_z(z) + \frac{U_\mu(\mu) + U_\nu(\nu)}{\sinh^2\mu + \sin^2\nu} = E.$$
Separating the first ordinary differential equation,

$$\frac{1}{2m}\left(\frac{dS_z}{dz}\right)^2 + U_z(z) = \Gamma_z,$$
yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator)

$$\left(\frac{dS_\mu}{d\mu}\right)^2 + \left(\frac{dS_\nu}{d\nu}\right)^2 + 2ma^2 U_\mu(\mu) + 2ma^2 U_\nu(\nu) = 2ma^2\left(\sinh^2\mu + \sin^2\nu\right)\left(E - \Gamma_z\right),$$
which itself may be separated into two independent ordinary differential equations,

$$\left(\frac{dS_\mu}{d\mu}\right)^2 + 2ma^2 U_\mu(\mu) + 2ma^2\left(\Gamma_z - E\right)\sinh^2\mu = \Gamma_\mu,$$
$$\left(\frac{dS_\nu}{d\nu}\right)^2 + 2ma^2 U_\nu(\nu) + 2ma^2\left(\Gamma_z - E\right)\sin^2\nu = \Gamma_\nu,$$
that, when solved, provide a complete solution for $S$.
==== Parabolic cylindrical coordinates ====
The Hamiltonian in parabolic cylindrical coordinates can be written
$$H = \frac{p_\sigma^2 + p_\tau^2}{2m\left(\sigma^2 + \tau^2\right)} + \frac{p_z^2}{2m} + U(\sigma, \tau, z).$$
The Hamilton–Jacobi equation is completely separable in these coordinates provided that $U$ has an analogous form

$$U(\sigma, \tau, z) = \frac{U_\sigma(\sigma) + U_\tau(\tau)}{\sigma^2 + \tau^2} + U_z(z),$$
where $U_\sigma(\sigma)$, $U_\tau(\tau)$, and $U_z(z)$ are arbitrary functions. Substitution of the completely separated solution

$$S = S_\sigma(\sigma) + S_\tau(\tau) + S_z(z) - Et + \text{constant}$$
into the HJE yields

$$\frac{1}{2m}\left(\frac{dS_z}{dz}\right)^2 + \frac{1}{2m\left(\sigma^2 + \tau^2\right)}\left[\left(\frac{dS_\sigma}{d\sigma}\right)^2 + \left(\frac{dS_\tau}{d\tau}\right)^2\right] + U_z(z) + \frac{U_\sigma(\sigma) + U_\tau(\tau)}{\sigma^2 + \tau^2} = E.$$
Separating the first ordinary differential equation,

$$\frac{1}{2m}\left(\frac{dS_z}{dz}\right)^2 + U_z(z) = \Gamma_z,$$
yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator)

$$\left(\frac{dS_\sigma}{d\sigma}\right)^2 + \left(\frac{dS_\tau}{d\tau}\right)^2 + 2m\left[U_\sigma(\sigma) + U_\tau(\tau)\right] = 2m\left(\sigma^2 + \tau^2\right)\left(E - \Gamma_z\right),$$
which itself may be separated into two independent ordinary differential equations,

$$\left(\frac{dS_\sigma}{d\sigma}\right)^2 + 2m U_\sigma(\sigma) + 2m\sigma^2\left(\Gamma_z - E\right) = \Gamma_\sigma,$$
$$\left(\frac{dS_\tau}{d\tau}\right)^2 + 2m U_\tau(\tau) + 2m\tau^2\left(\Gamma_z - E\right) = \Gamma_\tau,$$
that, when solved, provide a complete solution for $S$.
== Waves and particles ==
=== Optical wave fronts and trajectories ===
The HJE establishes a duality between trajectories and wavefronts. For example, in geometrical optics, light can be considered either as "rays" or as waves. The wave front can be defined as the surface $\mathcal{C}_t$ that the light emitted at time $t = 0$ has reached at time $t$. Light rays and wave fronts are dual: if one is known, the other can be deduced.
More precisely, geometrical optics is a variational problem where the "action" is the travel time $T$ along a path,

$$T = \frac{1}{c}\int_A^B n\, ds,$$

where $n$ is the medium's index of refraction and $ds$ is an infinitesimal arc length. From the above formulation, one can compute the ray paths using the Euler–Lagrange formulation; alternatively, one can compute the wave fronts by solving the Hamilton–Jacobi equation. Knowing one leads to knowing the other.
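This variational view can be exercised numerically (an illustrative sketch with made-up geometry, not from the article): minimizing the travel time of a ray that crosses a flat interface between two media recovers Snell's law $n_1\sin\theta_1 = n_2\sin\theta_2$.

```python
import math

# Light travels from A = (0, 1) to B = (1, -1), crossing the interface y = 0
# between media with refractive indices n1 (above) and n2 (below).
n1, n2 = 1.0, 1.5
A, B = (0.0, 1.0), (1.0, -1.0)

def T(x):
    # travel time (up to a factor 1/c) as a function of the crossing point x
    return n1*math.hypot(x - A[0], A[1]) + n2*math.hypot(B[0] - x, B[1])

# T is convex in x, so a simple ternary search on [0, 1] finds the minimum
lo, hi = 0.0, 1.0
for _ in range(200):
    m1, m2 = lo + (hi - lo)/3, hi - (hi - lo)/3
    if T(m1) < T(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi)/2

# Check Snell's law at the time-minimizing crossing point
sin1 = x/math.hypot(x, A[1])
sin2 = (B[0] - x)/math.hypot(B[0] - x, B[1])
assert abs(n1*sin1 - n2*sin2) < 1e-6
```

Setting the derivative of $T(x)$ to zero is exactly the Snell condition, so the numerical minimizer lands on the refracted ray.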
The above duality is very general and applies to all systems that derive from a variational principle: either compute the trajectories using the Euler–Lagrange equations or the wave fronts by using the Hamilton–Jacobi equation.
The wave front at time $t$, for a system initially at $\mathbf{q}_0$ at time $t_0$, is defined as the collection of points $\mathbf{q}$ such that $S(\mathbf{q}, t) = \text{const}$. If $S(\mathbf{q}, t)$ is known, the momentum is immediately deduced:

$$\mathbf{p} = \frac{\partial S}{\partial \mathbf{q}}.$$
Once $\mathbf{p}$ is known, tangents to the trajectories $\dot{\mathbf{q}}$ are computed by solving the equation

$$\frac{\partial \mathcal{L}}{\partial \dot{\mathbf{q}}} = \mathbf{p}$$

for $\dot{\mathbf{q}}$, where $\mathcal{L}$ is the Lagrangian. The trajectories are then recovered from the knowledge of $\dot{\mathbf{q}}$.
=== Relationship to the Schrödinger equation ===
The isosurfaces of the function $S(\mathbf{q}, t)$ can be determined at any time $t$. The motion of an $S$-isosurface as a function of time is defined by the motions of the particles beginning at the points $\mathbf{q}$ on the isosurface. The motion of such an isosurface can be thought of as a wave moving through $\mathbf{q}$-space, although it does not obey the wave equation exactly. To show this, let $S$ represent the phase of a wave

$$\psi = \psi_0 e^{iS/\hbar},$$

where $\hbar$ is a constant (the Planck constant) introduced to make the exponential argument dimensionless; changes in the amplitude of the wave can be represented by having $S$ be a complex number.
The Hamilton–Jacobi equation is then rewritten as

$$\frac{\hbar^2}{2m}\nabla^2\psi - U\psi = \frac{\hbar}{i}\frac{\partial \psi}{\partial t},$$

which is the Schrödinger equation.
Conversely, starting with the Schrödinger equation and our ansatz for $\psi$, it can be deduced that

$$\frac{1}{2m}\left(\nabla S\right)^2 + U + \frac{\partial S}{\partial t} = \frac{i\hbar}{2m}\frac{\nabla^2 \psi_0}{\psi_0}.$$
The classical limit ($\hbar \to 0$) of the Schrödinger equation above becomes identical to the following variant of the Hamilton–Jacobi equation,

$$\frac{1}{2m}\left(\nabla S\right)^2 + U + \frac{\partial S}{\partial t} = 0.$$
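A quick symbolic sanity check of this correspondence for the free particle, $U = 0$ (an assumed example, not from the article): the plane-wave phase $S = px - p^2 t/2m$ solves the classical HJE, and $\psi = e^{iS/\hbar}$ solves the Schrödinger equation in the form quoted above.

```python
import sympy as sp

x, t, p, m, hbar = sp.symbols('x t p m hbar', positive=True)

# Free-particle HJE solution: (dS/dx)^2/(2m) + dS/dt = 0
S = p*x - p**2*t/(2*m)
assert sp.simplify(sp.diff(S, x)**2/(2*m) + sp.diff(S, t)) == 0

# psi = exp(i S / hbar) solves (hbar^2/2m) psi_xx - U psi = (hbar/i) psi_t with U = 0
psi = sp.exp(sp.I*S/hbar)
assert sp.simplify(hbar**2/(2*m)*sp.diff(psi, x, 2) - hbar/sp.I*sp.diff(psi, t)) == 0
```

Here the right-hand correction term $\frac{i\hbar}{2m}\nabla^2\psi_0/\psi_0$ vanishes because the amplitude $\psi_0$ is constant, so the quantum and classical equations coincide exactly.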
== Applications ==
=== HJE in a gravitational field ===
Using the energy–momentum relation in the form

$$g^{\alpha\beta} P_\alpha P_\beta - (mc)^2 = 0$$

for a particle of rest mass $m$ travelling in curved space, where $g^{\alpha\beta}$ are the contravariant components of the metric tensor (i.e., the inverse metric) solved from the Einstein field equations, and $c$ is the speed of light. Setting the four-momentum $P_\alpha$ equal to minus the four-gradient of the action $S$,

$$P_\alpha = -\frac{\partial S}{\partial x^\alpha},$$
g
α
β
∂
S
∂
x
α
∂
S
∂
x
β
−
(
m
c
)
2
=
0
,
{\displaystyle g^{\alpha \beta }{\frac {\partial S}{\partial x^{\alpha }}}{\frac {\partial S}{\partial x^{\beta }}}-(mc)^{2}=0,}
in other words, in a gravitational field.
=== HJE in electromagnetic fields ===
For a particle of rest mass $m$ and electric charge $e$ moving in an electromagnetic field with four-potential $A_i = (\phi, \mathbf{A})$ in vacuum, the Hamilton–Jacobi equation in the geometry determined by the metric tensor $g^{ik} = g_{ik}$ has the form

$$g^{ik}\left(\frac{\partial S}{\partial x^i} + \frac{e}{c}A_i\right)\left(\frac{\partial S}{\partial x^k} + \frac{e}{c}A_k\right) = m^2 c^2$$
and can be solved for the Hamilton principal function $S$ to obtain the particle trajectory and momentum:

$$x = -\frac{e}{c\gamma}\int A_x\, d\xi, \quad y = -\frac{e}{c\gamma}\int A_y\, d\xi,$$
$$z = -\frac{e^2}{2c^2\gamma^2}\int \left(\mathbf{A}^2 - \overline{\mathbf{A}^2}\right) d\xi, \quad \xi = ct - \frac{e^2}{2\gamma^2 c^2}\int \left(\mathbf{A}^2 - \overline{\mathbf{A}^2}\right) d\xi,$$
$$p_x = -\frac{e}{c}A_x, \quad p_y = -\frac{e}{c}A_y, \quad p_z = \frac{e^2}{2\gamma c}\left(\mathbf{A}^2 - \overline{\mathbf{A}^2}\right), \quad \mathcal{E} = c\gamma + \frac{e^2}{2\gamma c}\left(\mathbf{A}^2 - \overline{\mathbf{A}^2}\right),$$
where $\xi = ct - z$ and $\gamma^2 = m^2c^2 + \frac{e^2}{c^2}\overline{A}^2$, with $\overline{\mathbf{A}}$ the cycle average of the vector potential.
==== A circularly polarized wave ====
In the case of circular polarization,

$$E_x = E_0 \sin \omega\xi_1, \quad E_y = E_0 \cos \omega\xi_1,$$
$$A_x = \frac{cE_0}{\omega}\cos \omega\xi_1, \quad A_y = -\frac{cE_0}{\omega}\sin \omega\xi_1.$$
Hence

$$x = -\frac{ecE_0}{\gamma\omega^2}\sin \omega\xi_1, \quad y = -\frac{ecE_0}{\gamma\omega^2}\cos \omega\xi_1,$$
$$p_x = -\frac{eE_0}{\omega}\cos \omega\xi_1, \quad p_y = \frac{eE_0}{\omega}\sin \omega\xi_1,$$
where $\xi_1 = \xi/c$, implying that the particle moves along a circular trajectory with a permanent radius $ecE_0/\gamma\omega^2$ and an invariable value of momentum $eE_0/\omega$ directed along the magnetic field vector.
==== A monochromatic linearly polarized plane wave ====
For the flat, monochromatic, linearly polarized wave with the field $E$ directed along the $y$ axis,

$$E_y = E_0 \cos \omega\xi_1, \quad A_y = -\frac{cE_0}{\omega}\sin \omega\xi_1,$$
hence

$$x = \text{const}, \quad y = y_0 \cos \omega\xi_1, \quad y_0 = -\frac{ecE_0}{\gamma\omega^2}, \quad z = C_z y_0 \sin 2\omega\xi_1, \quad C_z = \frac{eE_0}{8\gamma\omega}, \quad \gamma^2 = m^2c^2 + \frac{e^2 E_0^2}{2\omega^2},$$
$$p_x = 0, \quad p_y = p_{y,0}\sin \omega\xi_1, \quad p_{y,0} = \frac{eE_0}{\omega}, \quad p_z = -2C_z p_{y,0}\cos 2\omega\xi_1,$$
implying that the particle moves along a figure-8 trajectory with its long axis oriented along the electric field vector $\mathbf{E}$.
==== An electromagnetic wave with a solenoidal magnetic field ====
For the electromagnetic wave with axial (solenoidal) magnetic field:
$$E = E_\phi = \frac{\omega\rho_0}{c}B_0 \cos \omega\xi_1,$$
$$A_\phi = -\rho_0 B_0 \sin \omega\xi_1 = -\frac{L_s}{\pi\rho_0 N_s}I_0 \sin \omega\xi_1,$$
hence

$$x = \text{const}, \quad y = y_0 \cos \omega\xi_1, \quad y_0 = -\frac{e\rho_0 B_0}{\gamma\omega}, \quad z = C_z y_0 \sin 2\omega\xi_1, \quad C_z = \frac{e\rho_0 B_0}{8c\gamma}, \quad \gamma^2 = m^2c^2 + \frac{e^2\rho_0^2 B_0^2}{2c^2},$$
$$p_x = 0, \quad p_y = p_{y,0}\sin \omega\xi_1, \quad p_{y,0} = \frac{e\rho_0 B_0}{c}, \quad p_z = -2C_z p_{y,0}\cos 2\omega\xi_1,$$
where $B_0$ is the magnetic field magnitude in a solenoid with effective radius $\rho_0$, inductance $L_s$, number of windings $N_s$, and electric current magnitude $I_0$ through the solenoid windings. The particle motion occurs along a figure-8 trajectory in the $yz$ plane, set perpendicular to the solenoid axis, with an arbitrary azimuth angle $\varphi$ due to the axial symmetry of the solenoidal magnetic field.
== Further reading ==
Arnold, V.I. (1989). Mathematical Methods of Classical Mechanics (2 ed.). New York: Springer. ISBN 0-387-96890-3.
Hamilton, W. (1833). "On a General Method of Expressing the Paths of Light, and of the Planets, by the Coefficients of a Characteristic Function" (PDF). Dublin University Review: 795–826.
Hamilton, W. (1834). "On the Application to Dynamics of a General Mathematical Method previously Applied to Optics" (PDF). British Association Report: 513–518.
Fetter, A. & Walecka, J. (2003). Theoretical Mechanics of Particles and Continua. Dover Books. ISBN 978-0-486-43261-8.
Landau, L. D.; Lifshitz, E. M. (1975). Mechanics. Amsterdam: Elsevier.
Sakurai, J. J. (1985). Modern Quantum Mechanics. Benjamin/Cummings Publishing. ISBN 978-0-8053-7501-5.
Jacobi, C. G. J. (1884), Vorlesungen über Dynamik, C. G. J. Jacobi's Gesammelte Werke (in German), Berlin: G. Reimer, OL 14009561M
Nakane, Michiyo; Fraser, Craig G. (2002). "The Early History of Hamilton-Jacobi Dynamics". Centaurus. 44 (3–4): 161–227. doi:10.1111/j.1600-0498.2002.tb00613.x. PMID 17357243.
Fuzzy Sets and Systems is a peer-reviewed international scientific journal published by Elsevier on behalf of the International Fuzzy Systems Association (IFSA). It was founded in 1978. The editors-in-chief (as of 2010) are Bernard De Baets of the Department of Data Analysis and Mathematical Modelling at Ghent University in Belgium, Didier Dubois of IRIT, Université Paul Sabatier in Toulouse, France, and Eyke Hüllermeier of the Department of Mathematics, Statistics and Computer Science, Ludwig-Maximilians Universität München, Germany. The journal publishes 24 issues a year. Fuzzy Sets and Systems is abstracted and indexed by Scopus and the Science Citation Index. According to the Journal Citation Reports, the journal's two-year impact factor for 2020 is 3.343 and its five-year impact factor for 2020 is 3.213.
== References ==
== See also ==
Fuzzy control system
Fuzzy Control Language
Fuzzy logic
Fuzzy set | Wikipedia/Fuzzy_Sets_and_Systems |
In mathematics and mathematical biology, the Mackey–Glass equations, named after Michael Mackey and Leon Glass, refer to a family of delay differential equations whose behaviour manages to mimic both healthy and pathological behaviour in certain biological contexts, controlled by the equation's parameters. Originally, they were used to model the variation in the relative quantity of mature cells in the blood. The equations are defined as:
{\displaystyle P'(t)={\frac {\beta _{0}\theta ^{n}}{\theta ^{n}+P(t-\tau )^{n}}}-\gamma P(t)\qquad (1)}
and
{\displaystyle P'(t)={\frac {\beta _{0}\theta ^{n}P(t-\tau )}{\theta ^{n}+P(t-\tau )^{n}}}-\gamma P(t)\qquad (2)}
where P(t) represents the density of cells over time, and β0, θ, n, τ, γ are parameters of the equations.
Equation (2), in particular, is notable in dynamical systems since it can result in chaotic attractors with various dimensions.
== Introduction ==
There exist an enormous number of physiological systems that involve or rely on the periodic behaviour of certain subcomponents of the system. For example, many homeostatic processes rely on negative feedback to control the concentration of substances in the blood; breathing, for instance, is promoted by the detection, by the brain, of high CO2 concentration in the blood. One way to model such systems mathematically is with the following simple ordinary differential equation:
{\displaystyle y'(t)=k-cy(t)}
where k is the rate at which a "substance" is produced, and c controls how the current level of the substance discourages the continuation of its production. The solutions of this equation can be found via an integrating factor, and have the form:
{\displaystyle y(t)={\frac {k}{c}}+f(y_{0})e^{-ct}}
where y0 is any initial condition for the initial value problem.
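The closed form above can be sanity-checked numerically; the sketch below (an illustrative check, not from the article) integrates y'(t) = k − cy(t) with forward Euler and compares it with the closed form, in which f(y0) = y0 − k/c for this initial condition, so both approach the equilibrium k/c.

```python
# Numerical sanity check (illustrative): forward-Euler integration of
# y'(t) = k - c*y(t) should track the closed form
# y(t) = k/c + (y0 - k/c)*exp(-c*t), and both approach the equilibrium k/c.
import math

def euler(k, c, y0, t_end, dt=1e-4):
    y, t = y0, 0.0
    while t < t_end:
        y += dt * (k - c * y)
        t += dt
    return y

k, c, y0 = 2.0, 0.5, 0.0
exact = k / c + (y0 - k / c) * math.exp(-c * 10.0)
assert abs(euler(k, c, y0, 10.0) - exact) < 1e-2
```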
However, the above model assumes that variations in the substance concentration are detected immediately, which is often not the case in physiological systems. To address this problem, Mackey, M.C. & Glass, L. (1977) proposed changing the production rate to a function
k(y(t − τ)) of the concentration at an earlier point t − τ
in time, in hope that this would better reflect the fact that there is a significant delay before the bone marrow produces and releases mature cells in the blood, after detecting low cell concentration in the blood. By taking the production rate
k as being:
{\displaystyle {\frac {\beta _{0}\theta ^{n}}{\theta ^{n}+P(t-\tau )^{n}}}~~{\text{ or }}~~{\frac {\beta _{0}\theta ^{n}P(t-\tau )}{\theta ^{n}+P(t-\tau )^{n}}}}
we obtain Equations (1) and (2), respectively. The values used by Mackey, M.C. & Glass, L. (1977) were γ = 0.1, β0 = 0.2 and n = 10, with initial condition P(0) = 0.1. The value of θ is not relevant for the purpose of analyzing the dynamics of Equation (2), since the change of variable P(t) = θ·Q(t) reduces the equation to:
{\displaystyle Q'(t)={\frac {\beta _{0}Q(t-\tau )}{1+Q(t-\tau )^{n}}}-\gamma Q(t).}
This is why, in this context, plots often place Q(t) = P(t)/θ on the y-axis.
== Dynamical behaviour ==
It is of interest to study the behaviour of the equation solutions when τ is varied, since τ represents the time taken by the physiological system to react to the concentration variation of a substance. An increase in this delay can be caused by a pathology, which in turn can result in chaotic solutions for the Mackey–Glass equations, especially Equation (2). When τ = 6, we obtain a very regular periodic solution, which can be seen as characterizing "healthy" behaviour; on the other hand, when τ = 20 the solution gets much more erratic.
The Mackey–Glass attractor can be visualized by plotting the pairs (P(t), P(t − τ)). This is somewhat justified because delay differential equations can (sometimes) be reduced to a system of ordinary differential equations, and also because they are approximately infinite dimensional maps.
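The dynamics above can be reproduced with a minimal numerical sketch (not the original paper's numerics): forward-Euler integration of the normalized Equation (2), using a buffer for the delayed term and, as an assumption, a constant history Q(t) = q0 for t ≤ 0.

```python
# Minimal sketch: forward-Euler integration of the normalized Mackey-Glass
# equation  Q'(t) = beta0*Q(t - tau)/(1 + Q(t - tau)**n) - gamma*Q(t)
# with a history buffer for the delayed term; parameters follow the article.

def mackey_glass(tau, beta0=0.2, gamma=0.1, n=10, q0=0.1, dt=0.1, steps=50000):
    delay = int(round(tau / dt))       # how many steps back Q(t - tau) lies
    q = [q0] * (delay + 1)             # assumed constant history for t <= 0
    for _ in range(steps):
        q_tau = q[-delay - 1]          # delayed value Q(t - tau)
        q.append(q[-1] + dt * (beta0 * q_tau / (1 + q_tau ** n) - gamma * q[-1]))
    return q
```

Plotting pairs (q[i], q[i − delay]) from this trajectory gives the attractor view discussed above; τ = 6 yields a regular periodic orbit, while τ = 20 produces the erratic behaviour.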
== See also ==
Circadian rhythm
Circadian oscillator
Neural oscillation
== References == | Wikipedia/Mackey–Glass_equations |
In the theory of dynamical systems, the exponential map can be used as the evolution function of a discrete nonlinear dynamical system.
== Family ==
The family of exponential functions is called the exponential family.
== Forms ==
There are many forms of these maps, many of which are equivalent under a coordinate transformation. For example, two of the most common ones are:
{\displaystyle E_{c}:z\to e^{z}+c}
{\displaystyle E_{\lambda }:z\to \lambda e^{z}}
The second one can be mapped to the first using the fact that λe^z = e^(z + ln λ), so E_λ : z → e^(z + ln λ) is the same map under the change of variable w = z + ln λ. The only difference is that, due to the multi-valued nature of the complex logarithm, there may be a few select cases that can only be found in one version. Similar arguments can be made for many other formulas.
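As a rough illustration of iterating E_c(z) = e^z + c, the sketch below runs an escape test: it stops once Re(z) exceeds a bailout, since a large real part makes |e^z| astronomically large. The bailout value and iteration cap are illustrative choices, not a standard algorithm from the article.

```python
# Hypothetical escape-time sketch for the exponential map E_c(z) = exp(z) + c:
# iterate and report after how many steps Re(z) exceeds the bailout, or None
# if no escape is detected within max_iter iterations.
import cmath

def escape_time(c, z0=0j, bailout=50.0, max_iter=100):
    z = z0
    for k in range(max_iter):
        if z.real > bailout:
            return k                # orbit has escaped toward infinity
        z = cmath.exp(z) + c
    return None                     # orbit stayed bounded (e.g. near a fixed point)
```

For c = 1 the orbit of 0 escapes almost immediately, while for c = −2 it settles near the real fixed point of e^z − 2.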
== References == | Wikipedia/Exponential_map_(discrete_dynamical_systems) |
Springer Science+Business Media, commonly known as Springer, is a German multinational publishing company of books, e-books and peer-reviewed journals in science, humanities, technical and medical (STM) publishing.
Originally founded in 1842 in Berlin, it expanded internationally in the 1960s, and through mergers in the 1990s and a sale to venture capitalists it fused with Wolters Kluwer and eventually became part of Springer Nature in 2015. Springer has major offices in Berlin, Heidelberg, Dordrecht, and New York City.
== History ==
Julius Springer founded Springer-Verlag in Berlin in 1842 and his son Ferdinand Springer grew it from a small firm of 4 employees into Germany's then second-largest academic publisher with 65 staff in 1872. In 1964, Springer expanded its business internationally, opening an office in New York City. Offices in Tokyo, Paris, Milan, Hong Kong, and Delhi soon followed.
In 1999, the academic publishing company BertelsmannSpringer was formed after the media and entertainment company Bertelsmann bought a majority stake in Springer-Verlag. In 2003, the British investment groups Cinven and Candover bought BertelsmannSpringer from Bertelsmann. They merged the company in 2004 with the Dutch publisher Kluwer Academic Publishers (successor of D. Reidel, Dr. W. Junk, Plenum Publishers, most of Chapman & Hall, and Baltzer Science Publishers) which they bought from Wolters Kluwer in 2002, to form Springer Science+Business Media.
In 2006, Springer acquired Humana Press.
Springer acquired the open-access publisher BioMed Central in October 2008 for an undisclosed amount.
In 2009, Cinven and Candover sold Springer to two private equity firms, EQT AB and Government of Singapore Investment Corporation, confirmed in February 2010 after the competition authorities in the US and in Europe approved the transfer.
In 2011, Springer acquired Pharma Marketing and Publishing Services (MPS) from Wolters Kluwer.
In 2013, the London-based private equity firm BC Partners acquired a majority stake in Springer from EQT and GIC for $4.4 billion.
In January 2015, Holtzbrinck Publishing Group / Nature Publishing Group and Springer Science+Business Media announced a merger. In May 2015 they concluded the transaction and formed a new joint venture company, Springer Nature, with Holtzbrinck holding a majority 53% share and BC Partners retaining a 47% interest in the company.
== Products ==
In 1996, Springer launched electronic book and journal content on its SpringerLink site.
SpringerImages was launched in 2008. In 2009, SpringerMaterials, a platform for accessing the Landolt-Börnstein database of research and information on materials and their properties, was launched.
AuthorMapper is a free online tool for visualizing scientific research that enables document discovery based on author locations and geographic maps, helping users explore patterns in scientific research, identify literature trends, discover collaborative relationships, and locate experts in several scientific/medical fields.
Springer Protocols contained a collection of laboratory protocols, recipes that provide step-by-step instructions for conducting experiments, which in 2018 was made available in SpringerLink instead.
Book publications include major reference works, textbooks, monographs and book series; more than 168,000 titles are available as e-books in 24 subject collections.
=== Open access ===
Springer is a member of the Open Access Scholarly Publishers Association. For some of its journals, Springer does not require its authors to transfer their copyrights, and allows them to decide whether their articles are published under an open-access license or in the traditional restricted-license model. While open-access publishing typically requires the author to pay a fee for copyright retention, this fee is sometimes covered by a third party. For example, a national institution in Poland allows authors to publish in open-access journals without incurring any personal cost, with the fees covered by public funds.
== Controversies ==
In 1938, Springer-Verlag was pressed to apply Nazi principles on the journal Zentralblatt MATH. Tullio Levi-Civita, who was Jewish, was forced out from the editorial board, and Otto Neugebauer resigned in protest along with most of the rest of the board.
In 2014, it was revealed that 16 papers in conference proceedings published by Springer had been computer-generated using SCIgen. Springer subsequently retracted all papers from these proceedings. IEEE had removed more than 100 fake papers from its conference proceedings.
In 2015, Springer retracted 64 papers from 10 of its journals it had published after a fraudulent peer review process was uncovered.
=== Manipulation of bibliometrics ===
According to Goodhart's law and concerned academics like the signatories of the San Francisco Declaration on Research Assessment, commercial academic publishers benefit from manipulation of bibliometrics and scientometrics like the journal impact factor, which is often used as a proxy of prestige and can influence revenues, including public subsidies in the form of subscriptions and free work from academics.
Seven Springer Nature journals, which exhibited unusual levels of self-citation, had their journal impact factor of 2019 suspended from Journal Citation Reports in 2020, a sanction which hit 34 journals in total.
== Selected imprints ==
== Selected publications ==
Cellular Oncology
Encyclopaedia of Mathematics
Ergebnisse der Mathematik und ihrer Grenzgebiete (book series)
Graduate Texts in Mathematics (book series)
Grothendieck's Séminaire de géométrie algébrique
The International Journal of Advanced Manufacturing Technology
Lecture Notes in Computer Science
Undergraduate Texts in Mathematics (book series)
Zentralblatt MATH
MRS Bulletin
== See also ==
Category:Springer Science+Business Media academic journals
List of publishers
Media concentration
== References ==
== External links ==
Official website
Mary H. Munroe (2004). "Springer Timeline". The Academic Publishing Industry: A Story of Merger and Acquisition. Archived from the original on 2014-10-20 – via Northern Illinois University. | Wikipedia/Springer_Science+Business_Media_Dordrecht |
The Biham–Middleton–Levine traffic model is a self-organizing cellular automaton traffic flow model. It consists of a number of cars represented by points on a lattice with a random starting position, where each car may be one of two types: those that only move downwards (shown as blue in this article), and those that only move towards the right (shown as red in this article). The two types of cars take turns to move. During each turn, all the cars for the corresponding type advance by one step if they are not blocked by another car. It may be considered the two-dimensional analogue of the simpler Rule 184 model. It is possibly the simplest system exhibiting phase transitions and self-organization.
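The turn-based update rule described above can be sketched in a few lines; the grid encoding (0 for empty, 1 for red moving right, 2 for blue moving down) is an illustrative choice.

```python
# Illustrative BML turn on an N x N torus: grid[y][x] is 0 (empty),
# 1 (red, moves right) or 2 (blue, moves down). All reds step simultaneously
# (decisions read the pre-move grid), then all blues; a car advances only
# into an empty cell, and the lattice wraps around like a torus.

def step(grid):
    n = len(grid)
    for kind, dx, dy in ((1, 1, 0), (2, 0, 1)):   # red phase, then blue phase
        new = [row[:] for row in grid]
        for y in range(n):
            for x in range(n):
                if grid[y][x] == kind and grid[(y + dy) % n][(x + dx) % n] == 0:
                    new[(y + dy) % n][(x + dx) % n] = kind
                    new[y][x] = 0
        grid = new
    return grid
```

Iterating `step` from a random initial grid exhibits the free-flowing or jammed behaviour discussed below, depending on the car density.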
== History ==
The Biham–Middleton–Levine traffic model was first formulated by Ofer Biham, A. Alan Middleton, and Dov Levine in 1992. Biham et al. found that as the density of traffic increased, the steady-state flow of traffic suddenly went from smooth flow to a complete jam. In 2005, Raissa D'Souza found that for some traffic densities, there is an intermediate phase characterized by periodic arrangements of jams and smooth flow. In the same year, Angel, Holroyd and Martin were the first to rigorously prove that for densities close to one, the system will always jam. Later, in 2006, Tim Austin and Itai Benjamini found that for a square lattice of side N, the model will always self-organize to reach full speed if there are fewer than N/2 cars.
== Lattice space ==
The cars are typically placed on a square lattice that is topologically equivalent to a torus: that is, cars that move off the right edge would reappear on the left edge; and cars that move off the bottom edge would reappear on the top edge.
There has also been research in rectangular lattices instead of square ones. For rectangles with coprime dimensions, the intermediate states are self-organized bands of jams and free-flow with detailed geometric structure, that repeat periodically in time. In non-coprime rectangles, the intermediate states are typically disordered rather than periodic.
== Phase transitions ==
Despite the simplicity of the model, it has two highly distinguishable phases – the jammed phase, and the free-flowing phase. For low numbers of cars, the system will usually organize itself to achieve a smooth flow of traffic. In contrast, if there is a high number of cars, the system will become jammed to the extent that no single car will move. Typically, in a square lattice, the transition density is when there are around 32% as many cars as there are possible spaces in the lattice.
=== Intermediate phase ===
The intermediate phase occurs close to the transition density, combining features from both the jammed and free-flowing phases. There are principally two intermediate phases – disordered (which could be meta-stable) and periodic (which are provably stable). On rectangular lattices with coprime dimensions, only periodic orbits exist. In 2008 periodic intermediate phases were also observed in square lattices. Yet, on square lattices disordered intermediate phases are more frequently observed and tend to dominate densities close to the transition region.
== Rigorous analysis ==
Despite the simplicity of the model, rigorous analysis is very nontrivial. Nonetheless, there have been mathematical proofs regarding the Biham–Middleton–Levine traffic model. Proofs so far have been restricted to the extremes of traffic density. In 2005, Alexander Holroyd et al. proved that for densities sufficiently close to one, the system will have no cars moving infinitely often. In 2006, Tim Austin and Itai Benjamini proved that the model will always reach the free-flowing phase if the number of cars is less than half the edge length for a square lattice.
== Non-orientable surfaces ==
The model is typically studied on the orientable torus, but it is possible to implement the lattice on a Klein bottle. When the red cars reach the right edge, they reappear on the left edge except flipped vertically; the ones at the bottom are now at the top, and vice versa. More formally, for every y ∈ {0, …, N − 1}, a red car exiting the site (N − 1, y) would enter the site (0, N − y − 1). It is also possible to implement it on the real projective plane. In addition to flipping the red cars, the same is done for the blue cars: for every x ∈ {0, …, N − 1}, a blue car exiting the site (x, N − 1) would enter the site (N − x − 1, 0).
The behaviour of the system on the Klein bottle is much more similar to the one on the torus than the one on the real projective plane. For the Klein bottle setup, the mobility as a function of density starts to decrease slightly sooner than in the torus case, although the behaviour is similar for densities greater than the critical point. The mobility on the real projective plane decreases more gradually for densities from zero to the critical point. In the real projective plane, local jams may form at the corners of the lattice even though the rest of the lattice is free-flowing.
== Randomization ==
A randomized variant of the BML traffic model, called BML-R, was studied in 2010. Under periodic boundaries, instead of updating all cars of the same colour at once during each step, the randomized model performs L^2 updates (where L is the side length of the presumably square lattice): each time, a random cell is selected and, if it contains a car, it is moved to the next cell if possible. In this case, the intermediate state observed in the usual BML traffic model does not exist, due to the non-deterministic nature of the randomized model; instead the transition from the jammed phase to the free flowing phase is sharp.
Under open boundary conditions, instead of having cars that drive off one edge wrapping around the other side, new cars are added on the left and top edges with probability α and removed from the right and bottom edges with probability β, respectively. In this case, the number of cars in the system can change over time, and local jams can cause the lattice to appear very different from the usual model, such as having coexistence of jams and free-flowing areas; containing large empty spaces; or containing mostly cars of one type.
== References ==
== External links ==
CUDA implementation by Daniel Lu
WebGL implementation by Jason Davies
JavaScript implementation by Maciej Baron | Wikipedia/BML_traffic_model |
In cryptography, encryption (more specifically, encoding) is the process of transforming information in a way that, ideally, only authorized parties can decode. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Despite its goal, encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor.
For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users.
Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often used in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing. Modern encryption schemes use the concepts of public-key and symmetric-key. Modern encryption techniques ensure security because modern computers are inefficient at cracking the encryption.
== History ==
=== Ancient ===
One of the earliest forms of encryption is symbol replacement, which was first found in the tomb of Khnumhotep II, who lived around 1900 BC in Egypt. Symbol replacement encryption is “non-standard,” which means that the symbols require a cipher or key to understand. This type of early encryption was used throughout Ancient Greece and Rome for military purposes. One of the most famous military encryption developments was the Caesar cipher, in which a plaintext letter is shifted a fixed number of positions along the alphabet to get the encoded letter. A message encoded with this type of encryption could be decoded by shifting each letter back by the same fixed number.
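The Caesar cipher described above amounts to a few lines of code; this is a minimal illustrative sketch, with the shift supplied by the caller and non-letters passed through unchanged.

```python
# Illustrative Caesar cipher: shift each letter a fixed number of positions
# along the alphabet, wrapping around; non-letter characters are unchanged.
def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)
```

For example, `caesar("ATTACK AT DAWN", 3)` gives `"DWWDFN DW GDZQ"`, and applying the negated shift decodes it.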
Around 800 AD, Arab mathematician al-Kindi developed the technique of frequency analysis – which was an attempt to crack ciphers systematically, including the Caesar cipher. This technique looked at the frequency of letters in the encrypted message to determine the appropriate shift: for example, the most common letter in English text is E and is therefore likely to be represented by the letter that appears most commonly in the ciphertext. This technique was rendered ineffective by the polyalphabetic cipher, described by al-Qalqashandi (1355–1418) and Leon Battista Alberti (in 1465), which varied the substitution alphabet as encryption proceeded in order to confound such analysis.
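Al-Kindi-style frequency analysis against a Caesar shift can be sketched as below, under the simplifying assumption that the most frequent ciphertext letter stands for 'e', the most common letter in English; this only works when the sample is long enough for the frequencies to be representative.

```python
# Sketch of frequency analysis against a Caesar cipher: guess the shift by
# assuming the most frequent ciphertext letter is the encryption of 'e'.
from collections import Counter

def guess_shift(ciphertext):
    letters = [c for c in ciphertext.lower() if c.isalpha()]
    most_common = Counter(letters).most_common(1)[0][0]
    return (ord(most_common) - ord('e')) % 26
```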
=== 19th–20th century ===
Around 1790, Thomas Jefferson theorized a cipher to encode and decode messages to provide a more secure way of military correspondence. The cipher, known today as the Wheel Cipher or the Jefferson Disk, although never actually built, was theorized as a spool that could jumble an English message up to 36 characters. The message could be decrypted by plugging in the jumbled message to a receiver with an identical cipher.
A similar device to the Jefferson Disk, the M-94, was developed in 1917 independently by US Army Major Joseph Mauborgne. This device was used in U.S. military communications until 1942.
In World War II, the Axis powers used a more advanced version of the M-94 called the Enigma Machine. The Enigma Machine was more complex because unlike the Jefferson Wheel and the M-94, each day the jumble of letters switched to a completely new combination. Each day's combination was only known by the Axis, so many thought the only way to break the code would be to try over 17,000 combinations within 24 hours. The Allies used computing power to severely limit the number of reasonable combinations they needed to check every day, leading to the breaking of the Enigma Machine.
=== Modern ===
Today, encryption is used in the transfer of communication over the Internet for security and commerce. As computing power continues to increase, computer encryption is constantly evolving to prevent eavesdropping attacks. One of the first "modern" ciphers, DES, used a 56-bit key with 72,057,594,037,927,936 possibilities; it was cracked in 1999 by EFF's brute-force DES cracker, which required 22 hours and 15 minutes to do so. Modern encryption standards often use stronger key sizes, such as AES (256-bit mode), Twofish, ChaCha20-Poly1305 and Serpent (configurable up to 256 bits). Cipher suites that use a 128-bit or higher key, like AES, cannot feasibly be brute-forced, because the total number of keys is about 3.4 × 10^38 (2^128). The most likely option for cracking ciphers with high key size is to find vulnerabilities in the cipher itself, like inherent biases and backdoors, or to exploit physical side effects through side-channel attacks. For example, RC4, a stream cipher, was cracked due to inherent biases and vulnerabilities in the cipher.
== Encryption in cryptography ==
In the context of cryptography, encryption serves as a mechanism to ensure confidentiality. Since data may be visible on the Internet, sensitive information such as passwords and personal communication may be exposed to potential interceptors. The process of encrypting and decrypting messages involves keys. The two main types of keys in cryptographic systems are symmetric-key and public-key (also known as asymmetric-key).
Many complex cryptographic algorithms often use simple modular arithmetic in their implementations.
=== Types ===
In symmetric-key schemes, the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine used a new symmetric-key each day for encoding and decoding messages.
In public-key cryptography schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a secret document in 1973; beforehand, all encryption schemes were symmetric-key (also called private-key). The later work of Diffie and Hellman was published in a journal with a large readership, and the value of the methodology was explicitly described. The method became known as the Diffie–Hellman key exchange.
RSA (Rivest–Shamir–Adleman) is another notable public-key cryptosystem. Created in 1978, it is still used today for applications involving digital signatures. Using number theory, the RSA algorithm selects two prime numbers, which help generate both the encryption and decryption keys.
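The key generation just described can be illustrated with a toy example using the classic textbook parameters p = 61, q = 53, e = 17; these primes are far too small for real use, where the primes are hundreds of digits long.

```python
# Toy RSA (illustrative only): derive the key pair from two small primes
# and encrypt/decrypt an integer message with modular exponentiation.
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent via modular inverse (Python 3.8+)

def encrypt(m):                # m is an integer message with 0 <= m < n
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)
```

Here `decrypt(encrypt(65))` recovers 65; only `d` (kept private) undoes what the public pair `(e, n)` did.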
A publicly available public-key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann, and distributed free of charge with source code. PGP was purchased by Symantec in 2010 and is regularly updated.
== Uses ==
Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed used encryption for some of their data in transit, and 53% used encryption for some of their data in storage. Encryption can be used to protect data "at rest", such as information stored on computers and storage devices (e.g. USB flash drives). In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives; encrypting such files at rest helps protect them if physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), are another somewhat different example of using encryption on data at rest.
Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users.
=== Data erasure ===
Conventional methods for permanently deleting data from a storage device involve overwriting the device's whole content with zeros, ones, or other patterns – a process which can take a significant amount of time, depending on the capacity and the type of storage medium. Cryptography offers a way of making the erasure almost instantaneous. This method is called crypto-shredding. An example implementation of this method can be found on iOS devices, where the cryptographic key is kept in a dedicated 'effaceable storage'. Because the key is stored on the same device, this setup on its own does not offer full privacy or security protection if an unauthorized person gains physical access to the device.
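Crypto-shredding can be illustrated with a toy keystream cipher; the SHA-256 counter construction below is chosen purely for illustration and is not a vetted cipher. The point is that the device only ever persists ciphertext, so discarding the key renders the stored data practically unrecoverable in one instant.

```python
# Conceptual crypto-shredding sketch: encrypt with a toy SHA-256 counter
# keystream (illustrative, NOT a vetted cipher), store only the ciphertext,
# then "erase" the data instantly by forgetting the key.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = b"device-secret"
stored = keystream_xor(key, b"customer record")  # what the device persists
key = None                                       # crypto-shred: forget the key
```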
== Limitations ==
Encryption is used in the 21st century to protect digital data and information systems. As computing power increased over the years, encryption technology has only become more advanced and secure. However, this advancement in technology has also exposed a potential limitation of today's encryption methods.
The length of the encryption key is an indicator of the strength of the encryption method. For example, the original DES (Data Encryption Standard) used a 56-bit key, meaning it had 2^56 possible combinations. With today's computing power, a 56-bit key is no longer secure, being vulnerable to brute force attacks.
Quantum computing uses properties of quantum mechanics in order to process large amounts of data simultaneously. Quantum computing has been found to achieve computing speeds thousands of times faster than today's supercomputers. This computing power presents a challenge to today's encryption technology. For example, RSA encryption uses the multiplication of very large prime numbers to create a semiprime number for its public key. Decoding this key without its private key requires this semiprime number to be factored, which can take a very long time to do with modern computers. It would take a supercomputer anywhere from weeks to months to factor this key. However, quantum computing can use quantum algorithms to factor this semiprime number in the same amount of time it takes for normal computers to generate it. This would make all data protected by current public-key encryption vulnerable to quantum computing attacks. Other encryption techniques like elliptic curve cryptography and symmetric key encryption are also vulnerable to quantum computing.
While quantum computing could be a threat to encryption security in the future, quantum computing as it currently stands is still very limited. Quantum computing currently is not commercially available, cannot handle large amounts of code, and only exists as computational devices, not computers. Furthermore, quantum computing advancements will be able to be used in favor of encryption as well. The National Security Agency (NSA) is currently preparing post-quantum encryption standards for the future. Quantum encryption promises a level of security that will be able to counter the threat of quantum computing.
== Attacks and countermeasures ==
Encryption is an important tool but is not sufficient alone to ensure the security or privacy of sensitive information throughout its lifetime. Most applications of encryption protect information only at rest or in transit, leaving sensitive data in clear text and potentially vulnerable to improper disclosure during processing, such as by a cloud service for example. Homomorphic encryption and secure multi-party computation are emerging techniques to compute encrypted data; these techniques are general and Turing complete but incur high computational and/or communication costs.
In response to encryption of data at rest, cyber-adversaries have developed new types of attacks. These more recent threats to encryption of data at rest include cryptographic attacks, stolen ciphertext attacks, attacks on encryption keys, insider attacks, data corruption or integrity attacks, data destruction attacks, and ransomware attacks. Data fragmentation and active defense data protection technologies attempt to counter some of these attacks, by distributing, moving, or mutating ciphertext so it is more difficult to identify, steal, corrupt, or destroy.
== The debate around encryption ==
The question of balancing the need for national security with the right to privacy has been debated for years, since encryption has become critical in today's digital society. The modern encryption debate started around the 1990s, when the US government tried to ban cryptography on the grounds that it would threaten national security. The debate is polarized around two opposing views: those who see strong encryption as a problem, making it easier for criminals to hide their illegal acts online, and those who argue that encryption keeps digital communications safe. The debate heated up in 2014, when Big Tech companies like Apple and Google set encryption by default in their devices. This was the start of a series of controversies involving governments, companies and internet users.
=== Integrity protection of ciphertexts ===
Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a digital signature usually done by a hashing algorithm or a PGP signature. Authenticated encryption algorithms are designed to provide both encryption and integrity protection together. Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See for example traffic analysis, TEMPEST, or Trojan horse.
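The encrypt-then-MAC pattern described above can be sketched with Python's standard-library `hmac` module; the key and ciphertext bytes below are hypothetical placeholders, not part of any real protocol:

```python
import hmac
import hashlib

KEY = b"shared-secret-key"  # hypothetical demo key; use a random key in practice
TAG_LEN = hashlib.sha256().digest_size  # 32 bytes

def seal(ciphertext: bytes) -> bytes:
    """Append an HMAC-SHA256 tag computed over the ciphertext (encrypt-then-MAC)."""
    return ciphertext + hmac.new(KEY, ciphertext, hashlib.sha256).digest()

def verify(message: bytes) -> bool:
    """Recompute the tag; compare_digest avoids timing side channels."""
    ciphertext, tag = message[:-TAG_LEN], message[-TAG_LEN:]
    expected = hmac.new(KEY, ciphertext, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

sealed = seal(b"\x8f\x13\x02")           # stand-in for opaque ciphertext bytes
assert verify(sealed)                    # an intact message passes
tampered = bytes([sealed[0] ^ 1]) + sealed[1:]
assert not verify(tampered)              # any bit flip is detected
```

Because the tag is computed over the ciphertext on the sending device, any intermediate node that modifies the ciphertext invalidates the tag.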
Integrity protection mechanisms such as MACs and digital signatures must be applied to the ciphertext when it is first created, typically on the same device used to compose the message, to protect a message end-to-end along its full transmission path; otherwise, any node between the sender and the encryption agent could potentially tamper with it. Encrypting at the time of creation is only secure if the encryption device itself has correct keys and has not been tampered with. If an endpoint device has been configured to trust a root certificate that an attacker controls, for example, then the attacker can both inspect and tamper with encrypted data by performing a man-in-the-middle attack anywhere along the message's path. The common practice of TLS interception by network operators represents a controlled and institutionally sanctioned form of such an attack, but countries have also attempted to employ such attacks as a form of control and censorship.
=== Ciphertext length and padding ===
Even when encryption correctly hides a message's content and it cannot be tampered with at rest or in transit, a message's length is a form of metadata that can still leak sensitive information about the message. For example, the well-known CRIME and BREACH attacks against HTTPS were side-channel attacks that relied on information leakage via the length of encrypted content. Traffic analysis is a broad class of techniques that often employs message lengths to infer sensitive information about traffic flows by aggregating information about a large number of messages.
Padding a message's payload before encrypting it can help obscure the cleartext's true length, at the cost of increasing the ciphertext's size and introducing or increasing bandwidth overhead. Messages may be padded randomly or deterministically, with each approach having different tradeoffs. Encrypting and padding messages to form padded uniform random blobs or PURBs is a practice guaranteeing that the cipher text leaks no metadata about its cleartext's content, and leaks asymptotically minimal {\displaystyle O(\log \log M)} information via its length.
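As a toy illustration of how deterministic padding coarsens length metadata (a simplification for exposition, not the actual Padmé scheme used by PURBs), padding every payload up to the next power of two collapses a million possible plaintext lengths into only 21 observable ciphertext sizes:

```python
def pad_to_power_of_two(length: int) -> int:
    """Smallest power of two >= length (length >= 1); the padded length
    reveals only about log2(log2(M)) bits about the true length M."""
    return 1 << (length - 1).bit_length()

# A million distinct plaintext lengths collapse to 21 padded sizes (2^0 .. 2^20).
distinct = {pad_to_power_of_two(n) for n in range(1, 1_000_001)}
assert len(distinct) == 21
```

The overhead can approach 100% of the payload in the worst case, which is why practical schemes use tighter padding functions with slightly weaker leakage bounds.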
== See also ==
== References ==
== Further reading ==
Fouché Gaines, Helen (1939), Cryptanalysis: A Study of Ciphers and Their Solution, New York: Dover Publications Inc, ISBN 978-0486200972
Kahn, David (1967), The Codebreakers - The Story of Secret Writing (ISBN 0-684-83130-9)
Preneel, Bart (2000), "Advances in Cryptology – EUROCRYPT 2000", Springer Berlin Heidelberg, ISBN 978-3-540-67517-4
Sinkov, Abraham (1966): Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America. ISBN 0-88385-622-0
Tenzer, Theo (2021): SUPER SECRETO – The Third Epoch of Cryptography: Multiple, exponential, quantum-secure and above all, simple and practical Encryption for Everyone, Norderstedt, ISBN 978-3-755-76117-4.
Lindell, Yehuda; Katz, Jonathan (2014), Introduction to modern cryptography, Hall/CRC, ISBN 978-1466570269
Ermoshina, Ksenia; Musiani, Francesca (2022), Concealing for Freedom: The Making of Encryption, Secure Messaging and Digital Liberties (Foreword by Laura DeNardis)(open access) (PDF), Manchester, UK: matteringpress.org, ISBN 978-1-912729-22-7, archived from the original (PDF) on 2022-06-02
== External links ==
The dictionary definition of encryption at Wiktionary
Media related to Cryptographic algorithms at Wikimedia Commons | Wikipedia/Encryption_algorithms |
The Kuramoto model (or Kuramoto–Daido model), first proposed by Yoshiki Kuramoto (蔵本 由紀, Kuramoto Yoshiki), is a mathematical model used in describing synchronization. More specifically, it is a model for the behavior of a large set of coupled oscillators. Its formulation was motivated by the behavior of systems of chemical and biological oscillators, and it has found widespread applications in areas such as neuroscience and oscillating flame dynamics. Kuramoto was quite surprised when the behavior of some physical systems, namely coupled arrays of Josephson junctions, followed his model.
The model makes several assumptions, including that there is weak coupling, that the oscillators are identical or nearly identical, and that interactions depend sinusoidally on the phase difference between each pair of objects.
== Definition ==
In the most popular version of the Kuramoto model, each of the oscillators is considered to have its own intrinsic natural frequency {\displaystyle \omega _{i}}, and each is coupled equally to all other oscillators. Surprisingly, this fully nonlinear model can be solved exactly in the limit of infinite oscillators, N→ ∞; alternatively, using self-consistency arguments one may obtain steady-state solutions of the order parameter.
The most popular form of the model has the following governing equations:
{\displaystyle {\frac {d\theta _{i}}{dt}}=\omega _{i}+{\frac {1}{N}}\sum _{j=1}^{N}K_{ij}\sin(\theta _{j}-\theta _{i}),\qquad i=1\ldots N},
where the system is composed of N limit-cycle oscillators, with phases {\displaystyle \theta _{i}} and coupling constant K.
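A minimal numerical sketch of the uniformly coupled case (K_ij = K for all pairs) can be written in plain Python with Euler integration; the step size, run length, and seed below are illustrative choices, and the returned quantity is the order parameter r defined in the Transformation section:

```python
import math
import random

def simulate_kuramoto(omega, K, dt=0.01, steps=5000, seed=0):
    """Euler-integrate dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i)
    and return the final order parameter r = |Σ_j e^{iθ_j}| / N."""
    rng = random.Random(seed)
    N = len(omega)
    theta = [rng.uniform(-math.pi, math.pi) for _ in range(N)]
    for _ in range(steps):
        # Mean-field identity: (K/N) Σ_j sin(θ_j − θ_i) = K r sin(ψ − θ_i)
        c = sum(math.cos(t) for t in theta) / N   # r cos ψ
        s = sum(math.sin(t) for t in theta) / N   # r sin ψ
        theta = [t + dt * (w + K * (s * math.cos(t) - c * math.sin(t)))
                 for t, w in zip(theta, omega)]
    c = sum(math.cos(t) for t in theta) / N
    s = sum(math.sin(t) for t in theta) / N
    return math.hypot(c, s)

# Identical oscillators with positive coupling synchronize almost completely.
r = simulate_kuramoto(omega=[0.0] * 50, K=1.0)
assert r > 0.9
```

With heterogeneous natural frequencies and weak coupling, the same routine instead yields a small r, reflecting the incoherent state discussed later in the article.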
Noise can be added to the system. In that case, the original equation is altered to
{\displaystyle {\frac {d\theta _{i}}{dt}}=\omega _{i}+\zeta _{i}+{\dfrac {K}{N}}\sum _{j=1}^{N}\sin(\theta _{j}-\theta _{i})},
where {\displaystyle \zeta _{i}} is the fluctuation and a function of time. If the noise is considered to be white noise, then
{\displaystyle \langle \zeta _{i}(t)\rangle =0},
{\displaystyle \langle \zeta _{i}(t)\zeta _{j}(t')\rangle =2D\delta _{ij}\delta (t-t')}
with {\displaystyle D} denoting the strength of noise.
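In a simulation, white noise of this strength is usually handled with an Euler–Maruyama step, in which ζ_i contributes a Gaussian increment of standard deviation √(2D·dt) per step. A minimal sketch in plain Python (the two-oscillator state and parameters are illustrative):

```python
import math
import random

def noisy_step(theta, omega, K, D, dt, rng):
    """One Euler–Maruyama step of the noisy Kuramoto model: the white-noise
    term ζ_i integrates to a Gaussian of standard deviation sqrt(2 D dt)."""
    N = len(theta)
    return [theta[i]
            + dt * (omega[i] + (K / N) * sum(math.sin(theta[j] - theta[i])
                                             for j in range(N)))
            + rng.gauss(0.0, math.sqrt(2.0 * D * dt))
            for i in range(N)]

# With D = 0 the step reduces to the deterministic Euler update.
out = noisy_step([0.0, 1.0], [0.0, 0.0], K=1.0, D=0.0, dt=0.01,
                 rng=random.Random(0))
```

Scaling the noise by √dt rather than dt is what makes the discrete increments consistent with the delta-correlated covariance above.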
== Transformation ==
The transformation that allows this model to be solved exactly (at least in the N → ∞ limit) is as follows:
Define the "order" parameters r and ψ as
{\displaystyle re^{i\psi }={\frac {1}{N}}\sum _{j=1}^{N}e^{i\theta _{j}}}.
Here r represents the phase-coherence of the population of oscillators and ψ indicates the average phase. Substituting in the equation gives
{\displaystyle {\frac {d\theta _{i}}{dt}}=\omega _{i}+Kr\sin(\psi -\theta _{i})}.
Thus the oscillators' equations are no longer explicitly coupled; instead the order parameters govern the behavior. A further transformation is usually done, to a rotating frame in which the statistical average of phases over all oscillators is zero (i.e. {\displaystyle \psi =0}). Finally, the governing equation becomes
{\displaystyle {\frac {d\theta _{i}}{dt}}=\omega _{i}-Kr\sin(\theta _{i})}.
== Large N limit ==
Now consider the case as N tends to infinity. Take the distribution of intrinsic natural frequencies as g(ω) (assumed normalized). Then assume that the density of oscillators at a given phase θ, with given natural frequency ω, at time t is {\displaystyle \rho (\theta ,\omega ,t)}. Normalization requires that
{\displaystyle \int _{-\pi }^{\pi }\rho (\theta ,\omega ,t)\,d\theta =1.}
The continuity equation for oscillator density will be
{\displaystyle {\frac {\partial \rho }{\partial t}}+{\frac {\partial }{\partial \theta }}[\rho v]=0,}
where v is the drift velocity of the oscillators given by taking the infinite-N limit in the transformed governing equation, such that
{\displaystyle {\frac {\partial \rho }{\partial t}}+{\frac {\partial }{\partial \theta }}[\rho \omega +\rho Kr\sin(\psi -\theta )]=0.}
Finally, the definition of the order parameters must be rewritten for the continuum (infinite N) limit. {\displaystyle \theta _{i}} must be replaced by its ensemble average (over all {\displaystyle \omega }) and the sum must be replaced by an integral, to give
{\displaystyle re^{i\psi }=\int _{-\pi }^{\pi }e^{i\theta }\int _{-\infty }^{\infty }\rho (\theta ,\omega ,t)g(\omega )\,d\omega \,d\theta .}
== Solutions for the large N limit ==
The incoherent state with all oscillators drifting randomly corresponds to the solution {\displaystyle \rho =1/(2\pi )}. In that case {\displaystyle r=0}, and there is no coherence among the oscillators. They are uniformly distributed across all possible phases, and the population is in a statistical steady-state (although individual oscillators continue to change phase in accordance with their intrinsic ω).
When coupling K is sufficiently strong, a fully synchronized solution is possible. In the fully synchronized state, all the oscillators share a common frequency, although their phases can be different.
A solution for the case of partial synchronization yields a state in which only some oscillators (those near the ensemble's mean natural frequency) synchronize; other oscillators drift incoherently. Mathematically, the state has
{\displaystyle \rho =\delta \left(\theta -\psi -\arcsin \left({\frac {\omega }{Kr}}\right)\right)}
for locked oscillators, and
{\displaystyle \rho ={\frac {\rm {normalization\;constant}}{(\omega -Kr\sin(\theta -\psi ))}}}
for drifting oscillators. The cutoff occurs when {\displaystyle |\omega |<Kr}.
When {\displaystyle g} is unimodal and symmetric, then a stable state solution for the system is
{\displaystyle r=rK\int _{-\pi /2}^{\pi /2}\cos ^{2}\theta g(Kr\sin \theta )d\theta }
As coupling increases, there is a critical value {\displaystyle K_{c}=2/\pi g(0)} such that when {\displaystyle K<K_{c}}, the long-term average of {\displaystyle r=0}, but when {\displaystyle K=K_{c}(1+\mu )}, where {\displaystyle \mu >0} is small, then {\displaystyle r\approx {\sqrt {\frac {16}{\pi K_{\mathrm {c} }^{3}}}}{\sqrt {\frac {\mu }{-g^{\prime \prime }(0)}}}}.
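As a numerical illustration of the threshold formula K_c = 2/(π g(0)) (the two densities chosen here are standard textbook examples, not taken from this article): a standard normal g has g(0) = 1/√(2π), giving K_c ≈ 1.596, while a Lorentzian g(ω) = γ/(π(ω² + γ²)) has g(0) = 1/(πγ), giving the classic exact result K_c = 2γ:

```python
import math

def critical_coupling(g0: float) -> float:
    """K_c = 2 / (π g(0)) for a unimodal, symmetric frequency density g."""
    return 2.0 / (math.pi * g0)

# Standard normal density: g(0) = 1/sqrt(2π)  ->  K_c ≈ 1.596
Kc_normal = critical_coupling(1.0 / math.sqrt(2.0 * math.pi))
assert abs(Kc_normal - 1.5957691) < 1e-6

# Lorentzian with half-width γ: g(0) = 1/(π γ)  ->  K_c = 2γ exactly
gamma = 0.5
Kc_lorentzian = critical_coupling(1.0 / (math.pi * gamma))
assert abs(Kc_lorentzian - 2.0 * gamma) < 1e-12
```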
== Small N cases ==
When N is small, the solutions given above break down, as the continuum approximation cannot be used.
The N=2 case is trivial. In the rotating frame {\displaystyle \omega _{1}=-\omega _{2}}, and so the system is described exactly by the angle between the two oscillators: {\displaystyle \Delta \theta =\theta _{1}-\theta _{2}}. When {\displaystyle K<K_{c}=2|\omega _{1}|}, the angle cycles around the circle (that is, the fast oscillator keeps lapping around the slow oscillator). When {\displaystyle K>K_{c}}, the angle falls into a stable attractor (that is, the two oscillators lock in phase). Similarly, the state space of the N=3 case is a 2-dimensional torus, and so the system evolves as a flow on the 2-torus, which cannot be chaotic.
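For N=2 in the rotating frame, the phase difference obeys d(Δθ)/dt = 2ω₁ − K sin Δθ, which has a fixed point exactly when K ≥ 2|ω₁|. A small numerical check of the locking threshold (plain Python; the step size and run length are illustrative):

```python
import math

def locks(omega1: float, K: float, dt: float = 0.001, steps: int = 200_000) -> bool:
    """Integrate d(Δθ)/dt = 2ω₁ − K sin(Δθ) and report whether the phase
    difference settles at a fixed point (i.e. the drift term vanishes)."""
    dtheta = 1.0
    for _ in range(steps):
        dtheta += dt * (2.0 * omega1 - K * math.sin(dtheta))
    return abs(2.0 * omega1 - K * math.sin(dtheta)) < 1e-6

# With ω₁ = 0.5 the critical coupling is K_c = 2|ω₁| = 1.
assert locks(0.5, K=1.5)       # above threshold: the oscillators phase-lock
assert not locks(0.5, K=0.5)   # below threshold: the phase difference drifts
```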
Chaos first occurs when N=4. For some settings of {\displaystyle \omega _{1},\omega _{2},\omega _{3},K}, the system has a strange attractor.
A related case for N=2 is the circle map or phase-locked loop. In this model, one of the oscillators is driven at a fixed frequency (and thus no longer free to vary), while the other, weakly coupled to the driver, is free to spin arbitrarily.
== Connection to Hamiltonian systems ==
The dissipative Kuramoto model is contained in certain conservative Hamiltonian systems with Hamiltonian of the form
{\displaystyle {\mathcal {H}}(q_{1},\ldots ,q_{N},p_{1},\ldots ,p_{N})=\sum _{i=1}^{N}{\frac {\omega _{i}}{2}}(q_{i}^{2}+p_{i}^{2})+{\frac {K}{4N}}\sum _{i,j=1}^{N}(q_{i}p_{j}-q_{j}p_{i})(q_{j}^{2}+p_{j}^{2}-q_{i}^{2}-p_{i}^{2})}
After a canonical transformation to action-angle variables with actions {\displaystyle I_{i}=\left(q_{i}^{2}+p_{i}^{2}\right)/2} and angles (phases) {\displaystyle \phi _{i}=\mathrm {arctan} \left(q_{i}/p_{i}\right)}, exact Kuramoto dynamics emerges on invariant manifolds of constant {\displaystyle I_{i}\equiv I}. With the transformed Hamiltonian
{\displaystyle {\mathcal {H'}}(I_{1},\ldots I_{N},\phi _{1}\ldots ,\phi _{N})=\sum _{i=1}^{N}\omega _{i}I_{i}-{\frac {K}{N}}\sum _{i=1}^{N}\sum _{j=1}^{N}{\sqrt {I_{j}I_{i}}}(I_{j}-I_{i})\sin(\phi _{j}-\phi _{i}),}
Hamilton's equations of motion become
{\displaystyle {\frac {dI_{i}}{dt}}=-{\frac {\partial {\mathcal {H}}'}{\partial \phi _{i}}}=-{\frac {2K}{N}}\sum _{k=1}^{N}{\sqrt {I_{k}I_{i}}}(I_{k}-I_{i})\cos(\phi _{k}-\phi _{i})}
and
{\displaystyle {\frac {d\phi _{i}}{dt}}={\frac {\partial {\mathcal {H}}'}{\partial I_{i}}}=\omega _{i}+{\frac {K}{N}}\sum _{k=1}^{N}\left[2{\sqrt {I_{i}I_{k}}}\sin(\phi _{k}-\phi _{i})\right.\left.+{\sqrt {I_{k}/I_{i}}}(I_{k}-I_{i})\sin(\phi _{k}-\phi _{i})\right].}
So the manifold with {\displaystyle I_{j}=I} is invariant because {\displaystyle {\frac {dI_{i}}{dt}}=0} and the phase dynamics {\displaystyle {\frac {d\phi _{i}}{dt}}} becomes the dynamics of the Kuramoto model (with the same coupling constants for {\displaystyle I=1/2}). The class of Hamiltonian systems characterizes certain quantum-classical systems including Bose–Einstein condensates.
== Variations of the models ==
There are a number of types of variations that can be applied to the original model presented above. Some models change the topological structure, others allow for heterogeneous weights, and other changes are more related to models that are inspired by the Kuramoto model but do not have the same functional form.
=== Variations of network topology ===
Beside the original model, which has an all-to-all topology, a sufficiently dense complex network-like topology is amenable to the mean-field treatment used in the solution of the original model (see Transformation and Large N limit above for more info). Network topologies such as rings and coupled populations support chimera states. One may also ask about the behavior of models with intrinsically local topologies, such as one-dimensional ones, of which the chain and the ring are prototypical examples. In such topologies, in which the coupling does not scale as 1/N, it is not possible to apply the canonical mean-field approach, so one must rely upon case-by-case analysis, making use of symmetries whenever possible, which may give a basis for abstraction of general principles of solutions.
Uniform synchrony, waves and spirals can readily be observed in two-dimensional Kuramoto networks with diffusive local coupling. The stability of waves in these models can be determined analytically using the methods of Turing stability analysis. Uniform synchrony tends to be stable when the local coupling is everywhere positive whereas waves arise when the long-range connections are negative (inhibitory surround coupling). Waves and synchrony are connected by a topologically distinct branch of solutions known as ripple. These are low-amplitude spatially-periodic deviations that emerge from the uniform state (or the wave state) via a Hopf bifurcation. The existence of ripple solutions was predicted (but not observed) by Wiley, Strogatz and Girvan, who called them multi-twisted q-states.
The topology on which the Kuramoto model is studied can be made adaptive by use of a fitness model, showing enhancement of synchronization and percolation in a self-organised way.
A graph with minimum degree at least {\displaystyle d_{min}\geq 0.5\ n} will be connected, but connectivity alone does not guarantee synchronization. It is known that there is a critical connectivity threshold {\displaystyle \mu _{c}} such that any graph on {\displaystyle n} nodes with minimum degree {\displaystyle d_{min}\geq \mu _{c}(n-1)} must globally synchronise for {\displaystyle n} large enough. The critical threshold is known to lie between {\displaystyle 0.6875\leq \mu _{c}\leq 0.75}.
Similarly, it is known that Erdős–Rényi graphs with edge probability precisely {\displaystyle p=(1+\epsilon )\ln(n)/n} as {\displaystyle n} goes to infinity will be connected, and it has been conjectured that this is also the value at which these random graphs undergo synchronization, which a 2022 preprint claims to have proved.
=== Variations of network topology and network weights: from vehicle coordination to brain synchronization ===
Some works in the control community have focused on the Kuramoto model on networks and with heterogeneous weights (i.e. the interconnection strength between any two oscillators can be arbitrary). The dynamics of this model read as follows:
{\displaystyle {\frac {d\theta _{i}}{dt}}=\omega _{i}+\sum _{j=1}^{N}a_{ij}\sin(\theta _{j}-\theta _{i}),\qquad i=1\ldots N}
where {\displaystyle a_{ij}} is a nonzero positive real number if oscillator {\displaystyle j} is connected to oscillator {\displaystyle i}. Such a model allows for a more realistic study of, e.g., power grids, flocking, schooling, and vehicle coordination. In the work from Dörfler and colleagues, several theorems provide rigorous conditions for phase and frequency synchronization of this model. Further studies, motivated by experimental observations in neuroscience, focus on deriving analytical conditions for cluster synchronization of heterogeneous Kuramoto oscillators on arbitrary network topologies. Since the Kuramoto model seems to play a key role in assessing synchronization phenomena in the brain, theoretical conditions that support empirical findings may pave the way for a deeper understanding of neuronal synchronization phenomena.
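The weighted-network form above can be sketched with one Euler step in plain Python; the two-oscillator coupling matrix in the usage example is a hypothetical illustration:

```python
import math

def kuramoto_network_step(theta, omega, A, dt=0.01):
    """One Euler step of dθ_i/dt = ω_i + Σ_j a_ij sin(θ_j − θ_i)
    for an arbitrary nonnegative coupling matrix A."""
    N = len(theta)
    return [theta[i] + dt * (omega[i]
            + sum(A[i][j] * math.sin(theta[j] - theta[i]) for j in range(N)))
            for i in range(N)]

# Two symmetrically coupled oscillators: each step pulls the phases together.
theta = [0.0, math.pi / 2]
theta = kuramoto_network_step(theta, omega=[0.0, 0.0], A=[[0.0, 1.0], [1.0, 0.0]])
assert theta[1] - theta[0] < math.pi / 2
```

Setting A[i][j] = K/N for all pairs recovers the original all-to-all model, while sparse or asymmetric matrices express the arbitrary topologies studied in the control literature.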
=== Variations of the phase interaction function ===
Kuramoto approximated the phase interaction between any two oscillators by its first Fourier component, namely {\displaystyle \Gamma (\phi )=\sin(\phi )}, where {\displaystyle \phi =\theta _{j}-\theta _{i}}. Better approximations can be obtained by including higher-order Fourier components,
{\displaystyle \Gamma (\phi )=\sin(\phi )+a_{1}\sin(2\phi +b_{1})+...+a_{n}\sin(2n\phi +b_{n})},
where parameters {\displaystyle a_{i}} and {\displaystyle b_{i}} must be estimated. For example, synchronization among a network of weakly-coupled Hodgkin–Huxley neurons can be replicated using coupled oscillators that retain the first four Fourier components of the interaction function. The introduction of higher-order phase interaction terms can also induce interesting dynamical phenomena such as partially synchronized states, heteroclinic cycles, and chaotic dynamics.
== Availability ==
The pyclustering library includes a Python and C++ implementation of the Kuramoto model and its modifications. The library also includes oscillatory networks (for cluster analysis, pattern recognition, graph coloring, and image segmentation) that are based on the Kuramoto model and the phase oscillator.
== See also ==
Circle map
Master stability function
Oscillatory neural network
Phase-locked loop
Swarmalators
== References == | Wikipedia/Kuramoto_model |
In physics, the Landau–Hopf theory of turbulence, named for Lev Landau and Eberhard Hopf, was, until the mid-1970s, the accepted theory of how a fluid flow becomes turbulent. It states that as a fluid flows faster, it develops more Fourier modes. At first, a few modes dominate, but under stronger forcing the modes become power-law distributed, as explained in Kolmogorov's theory of turbulence.
== References ==
Landau, L. D. (1944). "К проблеме турбулентности" [On the problem of turbulence]. Doklady Akademii Nauk SSSR. 44: 339–342.
Hopf, E. (1948). "A mathematical example displaying the features of turbulence". Communications on Pure and Applied Mathematics. 1 (4): 303–322. doi:10.1002/cpa.3160010401. | Wikipedia/Landau-Hopf_theory_of_turbulence |
The New York Academy of Sciences (NYAS), originally founded as the Lyceum of Natural History in January 1817, is a nonprofit professional society based in New York City, with more than 20,000 members from 100 countries. It is the fourth-oldest scientific society in the United States.
The academy hosts programs and publishes scientific content across various disciplines, including life sciences, physical sciences, and social sciences. Additionally, the academy addresses critical cross-disciplinary topics such as nutrition, artificial intelligence, space exploration, and sustainability. Through these initiatives, the NYAS facilitates the exchange of scientific information among its members, the broader scientific community, the media, and the public.
The academy provides resources and support to researchers, from emerging scientists to seasoned professionals. In 2020, Nicholas Dirks was appointed as the president and CEO of the academy. Peter Salovey, Former President of Yale University, currently serves as the chair of the board of governors, guiding the academy's mission and strategic direction.
== History ==
Founded on January 29, 1817, the New York Academy of Sciences was originally called the Lyceum of Natural History. Attended by the academy's founder and first president, Samuel L. Mitchill, the first meeting of the Lyceum took place at the College of Physicians and Surgeons, located on Barclay Street near Broadway in lower Manhattan. Within a few months of the first meeting, the Lyceum moved to the New York Institution (located on the northwest corner of Broadway and Chambers Street) and began its first activities—hosting lectures, collecting natural history specimens, and establishing a library. In 1823, the Lyceum began publishing its own scientific journal, Annals of the Lyceum of Natural History of New York, which, in 1876, was renamed Annals of the New York Academy of Sciences. By 1826 the Lyceum owned "the richest collection of reptiles and fish in the country." A fire in 1866 destroyed the collection. Following the fire, the academy turned its focus away from collecting and natural history to the ever-specializing domains of scientific research and inquiry, community outreach, and involvement in the scientific endeavors of the main scientific organizations in New York City. This included the dissemination of scientific information at all levels—from a curious public to specialized science societies, colleges, and universities.
From the outset, the New York Academy of Sciences' membership was unusual among scientific societies in the 19th century because its democratic structure allowed all to join, from laypeople to professional scientists, clinicians, and engineers. The membership has always included a mix of scientists, business people, academics, government workers, and members of the general public. Prominent members have included United States Presidents (Thomas Jefferson and James Monroe), as well as many notable scientists and scholars, including Asa Gray (who served as the superintendent of the academy starting in 1836), John James Audubon, Alexander Graham Bell, Thomas Edison, Louis Pasteur, Charles Darwin, Nikola Tesla, Margaret Mead (who served for a time as the vice president of the academy), Rosalyn Sussman Yalow, Elizabeth Blackburn, and Jennifer Doudna. Prior to 1877, the academy only admitted men, but on November 5, 1877, it elected Erminnie A. Smith as its first female member. June Bacon-Bercey was the Academy's first African-American member and its first African-American woman member. Members, Honorary Members, Corresponding Members, and Fellows have included many renowned scientists, including dozens of Nobel Prize laureates over the years.
Early Academy members played prominent roles in the establishment of New York University in 1831, the American Museum of Natural History in 1869, and the New York Botanical Garden.
The academy's programs and publications have contributed significantly to scientific discussions and progress over its history, including: in 1876, publishing one of the first studies on environmental pollution; conducting the first-of-its-kind scientific survey and publication The Scientific Survey of Puerto Rico and the Virgin Islands, from 1907 to 1934; holding the first conference and publication of key papers on antibiotics in 1945–46; hosting a conference and publishing key papers on the cardiovascular effects of smoking in 1960 and on the effects of asbestos on human health in 1964–65; founding the Women in Science Committee in 1977; convening the world's first major scientific conference on AIDS in 1983–1984; and an early conference on SARS in 2003.
More recent activities have included: annual meetings on machine learning; programs designed to reduce time and costs of Alzheimer's research; programs on the development of the brain from before birth through early childhood; convening the inaugural Summit on Science Enablement for the (United Nations) Sustainable Development Goals in 2017; and convening climate scientists and city planners, industry experts, policymakers, and representatives of NGOs for a conference marking the 10th anniversary of a partnership between the New York City Panel on Climate Change, the City of New York, and Annals of the New York Academy of Sciences. The journal published three volumes (in 2010, 2015, and 2019) of scientific studies on climate change in NYC.
Like most scientific organizations in early 2020, the academy turned resources to programming related to the COVID-19 pandemic, producing over 35 programs on the science of SARS-CoV-2, developments in vaccines and therapies, and lessons on how to prepare for future outbreaks.
The Academy moved to a new facility on the 8th floor of the United States Realty Building (115 Broadway) in May 2023. The new facility is roughly six blocks south of the Academy's original home near Barclay and Broadway.
== Publications ==
Annals of the New York Academy of Sciences (first published as Annals of the Lyceum of Natural History in 1823) is one of the oldest continuously published scientific journals in the United States. Annals is an international science journal published monthly in many areas of science, though predominantly the biological sciences. Each issue presents original research articles and/or commissioned reviews, commentary, and perspective articles. Annals is a hybrid journal—i.e., it is available by subscription from John Wiley & Sons and over 30% of individual papers are freely available via Creative Commons licenses. The journal is rigorously peer-reviewed, and is ranked 13 out of 73 journals in the Multidisciplinary Sciences category by the 2020 Journal Citation Reports™ (Clarivate Analytics).
Transactions of the New York Academy of Sciences is a historical publication of the academy. Published as two series (Series 1, volumes 1–16, 1881–1897, and Series 2, volumes 1–41, 1938–1983), Transactions presents scholarly and scientific proceedings of the various Academy scientific “Sections” (e.g., Section of Anthropology, Biology, Physics and Chemistry, Oceanography and Meteorology, Mathematics and Engineering, Geology and Mineralogy, and several others) and of other scientific events and proceedings at the academy.
The Sciences was a popular science magazine published by the academy from 1961 to 2001. It bridged the sciences and culture, winning seven National Magazine Awards.
Over the past 15 years, these seminal publications, as well as the academy's archive, were digitized.
== Programs ==
Frontiers of Science
The New York Academy of Sciences produces domestic and international conferences, convened in-person and virtually, that cover topics including genomic medicine, chemical, and structural biology, drug discovery, computer science, and urban sustainability. The academy's Frontiers of Science programs provide a neutral forum for participants to exchange information on basic and applied research and to discuss the broader role of science, medicine, and technology in society. In addition to programming related to the COVID-19 pandemic, recent conferences have also explored conflicts of interest in the health sciences and medicine, racial bias in science and academia, science denialism, issues in bioethics and law in space travel, and profiles of women in the top echelons of science.
The Global STEM Alliance and the Junior Academy
The Global STEM Alliance offers challenge competitions, supports teachers with professional development, and trains STEM professionals to serve as mentors. The Junior Academy is a community under the New York Academy of Sciences that aims to connect students ages 13 to 17. Each year, 1,000 students from around the world are selected to be a part of the program and compete in 10-week-long challenges.
The Science Alliance
The Science Alliance supports early-career researchers, providing entrepreneurial opportunities, platforms for cross-cultural personal and professional networking, and learning resources.
Nutrition Program
The New York Academy of Sciences’ Nutrition Science Program supports maternal and child nutrition, and provides leadership in food safety, food security, and the drive to end micronutrient deficiencies.
International Science Reserve
The International Science Reserve (ISR) includes members who provide resources (e.g., genomic sequencing, specialized talent, labs, databases, high performance computing), advice, and support. It is governed by a board of leaders in industry, government, academia, and non-governmental organizations. Partners include IBM, Google, and UL.
The Interstellar Initiative
The Interstellar Initiative, a program developed with the Japan Agency for Medical Research and Development, fosters international and interdisciplinary collaboration between scientists early in their careers. With the guidance of leading senior researchers, teams develop research plans and grant proposals centered in the life sciences. Since 2017, the Interstellar Initiative has supported over 170 early-career scientists and 41 senior scientists as mentors.
== Awards ==
The Blavatnik Awards for Young Scientists were established in 2007 by the Blavatnik Family Foundation. The awards are independently administered by the Academy. The awards, considered the largest unrestricted prize ever created for early-career scientists, are given each year to scientists and engineers 42 years of age and younger. Selection is based on the quality, novelty and impact of research and potential for further significant contributions to science. By the close of 2024, the Blavatnik Awards recognized more than 470 young scientists and engineers in 37 disciplines from 53 countries, and awarded unrestricted cash prizes totaling US$17.2M.
The Tata Transformation Prize, established in 2023 by Tata Sons, awards INR 2 crores (approximately US$240,000) in three categories (food security, sustainability, and healthcare) to scientists employed by an eligible university, institute or other research organization in India. Awards are made by a confidential jury, independently chosen by the Academy.
Established in 2016, but now paused for a "review and refresh", the Innovators in Science Award, administered by the Academy and sponsored by Takeda Pharmaceutical Company, honors both a promising early-career scientist and an outstanding senior scientist for exceptional research contributions in rotating fields of biomedicine. The winners each receive a US$200,000 prize, intended to support their commitment to innovative research.
== References ==
== Bibliography ==
Douglas Sloan, "Science in New York City, 1867-1907," Isis 71 (March 1980), pp. 35–76.
Simon Baatz, Knowledge, Culture, and Science in the Metropolis: The New York Academy of Sciences, 1817–1970, Annals of the New York Academy of Sciences, New York, NY, 1990, Volume 584
"For Science Academy, Move to World Trade Center Is Like Going Home," The New York Times, October 30, 2006
== External links ==
New York Academy of Sciences official website
"New York Academy of Sciences, The". New International Encyclopedia. 1905.
The physics of a bouncing ball concerns the physical behaviour of bouncing balls, particularly their motion before, during, and after impact against the surface of another body. Several aspects of a bouncing ball's behaviour serve as an introduction to mechanics in high school or undergraduate level physics courses. However, the exact modelling of the behaviour is complex and of interest in sports engineering.
The motion of a ball is generally described by projectile motion (which can be affected by gravity, drag, the Magnus effect, and buoyancy), while its impact is usually characterized through the coefficient of restitution (which can be affected by the nature of the ball, the nature of the impacting surface, the impact velocity, rotation, and local conditions such as temperature and pressure). To ensure fair play, many sports governing bodies set limits on the bounciness of their ball and forbid tampering with the ball's aerodynamic properties. The bounciness of balls has been a feature of sports as ancient as the Mesoamerican ballgame.
== Forces during flight and effect on motion ==
The motion of a bouncing ball obeys projectile motion. Many forces act on a real ball, namely the gravitational force (FG), the drag force due to air resistance (FD), the Magnus force due to the ball's spin (FM), and the buoyant force (FB). In general, one has to use Newton's second law taking all forces into account to analyze the ball's motion:
\sum \mathbf{F} = m\mathbf{a}, \qquad \mathbf{F}_{\text{G}} + \mathbf{F}_{\text{D}} + \mathbf{F}_{\text{M}} + \mathbf{F}_{\text{B}} = m\mathbf{a} = m\frac{d\mathbf{v}}{dt} = m\frac{d^{2}\mathbf{r}}{dt^{2}},
where m is the ball's mass. Here, a, v, r represent the ball's acceleration, velocity, and position over time t.
=== Gravity ===
The gravitational force is directed downwards and is equal to
F_{\text{G}} = mg,
where m is the mass of the ball, and g is the gravitational acceleration, which on Earth varies between 9.764 m/s² and 9.834 m/s². Because the other forces are usually small, the motion is often idealized as being only under the influence of gravity. If only the force of gravity acts on the ball, the mechanical energy will be conserved during its flight. In this idealized case, the equations of motion are given by
\mathbf{a} = -g\,\hat{\mathbf{j}}, \qquad \mathbf{v} = \mathbf{v}_{0} + \mathbf{a}t, \qquad \mathbf{r} = \mathbf{r}_{0} + \mathbf{v}_{0}t + \frac{1}{2}\mathbf{a}t^{2},
where a, v, and r denote the acceleration, velocity, and position of the ball, and v0 and r0 are the initial velocity and position of the ball, respectively.
More specifically, if the ball is bounced at an angle θ with the ground, the motion in the x- and y-axes (representing horizontal and vertical motion, respectively) is described by

v_{x} = v_{0}\cos(\theta), \qquad x = x_{0} + v_{0}\cos(\theta)\,t,
v_{y} = v_{0}\sin(\theta) - gt, \qquad y = y_{0} + v_{0}\sin(\theta)\,t - \frac{1}{2}gt^{2}.

The equations imply that the maximum height (H), range (R), and time of flight (T) of a ball bouncing on a flat surface are given by
H = \frac{v_{0}^{2}}{2g}\sin^{2}(\theta), \qquad R = \frac{v_{0}^{2}}{g}\sin(2\theta), \qquad T = \frac{2v_{0}}{g}\sin(\theta).
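These closed-form results are straightforward to check numerically; in the sketch below, the launch speed (10 m/s) and angle (45°) are arbitrary illustrative values.

```python
import math

def projectile_stats(v0, theta_deg, g=9.81):
    """Maximum height H, range R, and time of flight T for a ball
    launched at speed v0 and angle theta_deg, neglecting drag."""
    theta = math.radians(theta_deg)
    H = v0**2 / (2 * g) * math.sin(theta)**2
    R = v0**2 / g * math.sin(2 * theta)
    T = 2 * v0 / g * math.sin(theta)
    return H, R, T

H, R, T = projectile_stats(10.0, 45.0)
# At 45 degrees the range is maximal (R = v0^2/g) and H = R/4.
```

The three quantities are mutually consistent; in particular H = gT²/8, an identity that reappears when the coefficient of restitution is related to flight time below.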
Further refinements to the motion of the ball can be made by taking into account air resistance (and related effects such as drag and wind), the Magnus effect, and buoyancy. Because lighter balls accelerate more readily, their motion tends to be affected more by such forces.
=== Drag ===
Air flow around the ball can be either laminar or turbulent depending on the Reynolds number (Re), defined as:
\text{Re} = \frac{\rho D v}{\mu},
where ρ is the density of air, μ the dynamic viscosity of air, D the diameter of the ball, and v the velocity of the ball through air. At a temperature of 20 °C, ρ = 1.2 kg/m³ and μ = 1.8×10⁻⁵ Pa·s.
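For a sense of scale, the formula can be evaluated with the air properties quoted above; the ball diameter (22 cm) and speed (20 m/s) below are illustrative assumptions.

```python
rho = 1.2    # density of air at 20 C, kg/m^3
mu = 1.8e-5  # dynamic viscosity of air at 20 C, Pa*s

def reynolds(D, v):
    """Reynolds number Re = rho*D*v/mu for a ball of diameter D at speed v."""
    return rho * D * v / mu

Re = reynolds(0.22, 20.0)
# Re is of order 10^5, far above the Stokes regime (Re < 1),
# so the quadratic drag equation below applies rather than Stokes' law.
```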
If the Reynolds number is very low (Re < 1), the drag force on the ball is described by Stokes' law:
F_{\text{D}} = 6\pi\mu r v,
where r is the radius of the ball. This force acts in opposition to the ball's direction (in the direction of -\hat{\mathbf{v}}). For most sports balls, however, the Reynolds number will be between 10⁴ and 10⁵ and Stokes' law does not apply. At these higher values of the Reynolds number, the drag force on the ball is instead described by the drag equation:
F_{\text{D}} = \frac{1}{2}\rho C_{\text{d}} A v^{2},
where Cd is the drag coefficient, and A the cross-sectional area of the ball.
Drag will cause the ball to lose mechanical energy during its flight, and will reduce its range and height, while crosswinds will deflect it from its original path. Both effects have to be taken into account by players in sports such as golf.
=== Magnus effect ===
The spin of the ball will affect its trajectory through the Magnus effect. According to the Kutta–Joukowski theorem, for a spinning sphere with an inviscid flow of air, the Magnus force is equal to
F_{\text{M}} = \frac{8}{3}\pi r^{3}\rho\omega v,
where r is the radius of the ball, ω the angular velocity (or spin rate) of the ball, ρ the density of air, and v the velocity of the ball relative to air. This force is directed perpendicular to the motion and perpendicular to the axis of rotation (in the direction of \hat{\boldsymbol{\omega}} \times \hat{\mathbf{v}}). The force is directed upwards for backspin and downwards for topspin. In reality, flow is never inviscid, and the Magnus lift is better described by
F_{\text{M}} = \frac{1}{2}\rho C_{\text{L}} A v^{2},
where ρ is the density of air, CL the lift coefficient, A the cross-sectional area of the ball, and v the velocity of the ball relative to air. The lift coefficient is a complex factor which depends amongst other things on the ratio rω/v, the Reynolds number, and surface roughness. In certain conditions, the lift coefficient can even be negative, changing the direction of the Magnus force (reverse Magnus effect).
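For a rough magnitude estimate, the inviscid Kutta–Joukowski expression above can be evaluated directly; the radius, spin rate, and speed below are illustrative assumptions rather than values for any particular sport.

```python
import math

def magnus_force_inviscid(r, omega, v, rho=1.2):
    """Inviscid Magnus force estimate F_M = (8/3) * pi * r^3 * rho * omega * v."""
    return 8.0 / 3.0 * math.pi * r**3 * rho * omega * v

# A tennis-ball-sized sphere (r = 3.3 cm) spinning at 300 rad/s
# and moving at 30 m/s experiences a few newtons of Magnus force.
F_M = magnus_force_inviscid(0.033, 300.0, 30.0)
```

Note the force is linear in both spin rate and speed in this idealized model; the viscous formula with C_L above does not share this simple scaling.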
In sports like tennis or volleyball, the player can use the Magnus effect to control the ball's trajectory (e.g. via topspin or backspin) during flight. In golf, the effect is responsible for slicing and hooking which are usually a detriment to the golfer, but also helps with increasing the range of a drive and other shots. In baseball, pitchers use the effect to create curveballs and other special pitches.
Ball tampering is often illegal, and is often at the centre of cricket controversies such as the one between England and Pakistan in August 2006. In baseball, the term 'spitball' refers to the illegal coating of the ball with spit or other substances to alter the aerodynamics of the ball.
=== Buoyancy ===
Any object immersed in a fluid such as water or air will experience an upwards buoyancy. According to Archimedes' principle, this buoyant force is equal to the weight of the fluid displaced by the object. In the case of a sphere, this force is equal to
F_{\text{B}} = \frac{4}{3}\pi r^{3}\rho g.
The buoyant force is usually small compared to the drag and Magnus forces and can often be neglected. However, in the case of a basketball, the buoyant force can amount to about 1.5% of the ball's weight. Since buoyancy is directed upwards, it will act to increase the range and height of the ball.
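The roughly 1.5% figure can be reproduced from Archimedes' principle; the basketball radius and mass below are assumed typical values.

```python
import math

rho_air = 1.2   # density of air, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2
r = 0.12        # basketball radius, m (assumed typical value)
m = 0.62        # basketball mass, kg (assumed typical value)

F_B = 4.0 / 3.0 * math.pi * r**3 * rho_air * g  # buoyant force, N
weight = m * g                                  # N
ratio = F_B / weight                            # roughly 0.014, i.e. ~1.5%
```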
== Impact ==
When a ball impacts a surface, the surface recoils and vibrates, as does the ball, creating both sound and heat, and the ball loses kinetic energy. Additionally, the impact can impart some rotation to the ball, transferring some of its translational kinetic energy into rotational kinetic energy. This energy loss is usually characterized (indirectly) through the coefficient of restitution (or COR, denoted e):
e = -\frac{v_{\text{f}} - u_{\text{f}}}{v_{\text{i}} - u_{\text{i}}},
where vf and vi are the final and initial velocities of the ball, and uf and ui are the final and initial velocities of the impacting surface, respectively. In the specific case where a ball impacts on an immovable surface, the COR simplifies to
e = -\frac{v_{\text{f}}}{v_{\text{i}}}.
For a ball dropped against a floor, the COR will therefore vary between 0 (no bounce, total loss of energy) and 1 (perfectly bouncy, no energy loss). A COR value below 0 or above 1 is theoretically possible, but would indicate that the ball went through the surface (e < 0), or that the surface was not "relaxed" when the ball impacted it (e > 1), as in the case of a ball landing on a spring-loaded platform.
To analyze the vertical and horizontal components of the motion, the COR is sometimes split up into a normal COR (ey), and tangential COR (ex), defined as
e_{y} = -\frac{v_{\text{yf}} - u_{\text{yf}}}{v_{\text{yi}} - u_{\text{yi}}},
e_{x} = -\frac{(v_{\text{xf}} - r\omega_{\text{f}}) - (u_{\text{xf}} - R\Omega_{\text{f}})}{(v_{\text{xi}} - r\omega_{\text{i}}) - (u_{\text{xi}} - R\Omega_{\text{i}})},
where r and ω denote the radius and angular velocity of the ball, while R and Ω denote the radius and angular velocity of the impacting surface (such as a baseball bat). In particular rω is the tangential velocity of the ball's surface, while RΩ is the tangential velocity of the impacting surface. These quantities are of particular interest when the ball impacts the surface at an oblique angle, or when rotation is involved.
For a straight drop on the ground with no rotation, with only the force of gravity acting on the ball, the COR can be related to several other quantities by:
e = \left|\frac{v_{\text{f}}}{v_{\text{i}}}\right| = \sqrt{\frac{K_{\text{f}}}{K_{\text{i}}}} = \sqrt{\frac{U_{\text{f}}}{U_{\text{i}}}} = \sqrt{\frac{H_{\text{f}}}{H_{\text{i}}}} = \frac{T_{\text{f}}}{T_{\text{i}}} = \sqrt{\frac{gT_{\text{f}}^{2}}{8H_{\text{i}}}}.
Here, K and U denote the kinetic and potential energy of the ball, H is the maximum height of the ball, and T is the time of flight of the ball. The 'i' and 'f' subscripts refer to the initial (before impact) and final (after impact) states of the ball. Likewise, the energy loss at impact can be related to the COR by
\text{Energy Loss} = \frac{K_{\text{i}} - K_{\text{f}}}{K_{\text{i}}} \times 100\% = \left(1 - e^{2}\right) \times 100\%.
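For the common case of a drop test onto a rigid floor, both the COR and the energy loss follow directly from the drop and rebound heights; in the sketch below, the 64% rebound fraction is an arbitrary example.

```python
import math

def cor_from_heights(h_drop, h_bounce):
    """COR of a ball dropped from rest: e = sqrt(H_f / H_i)."""
    return math.sqrt(h_bounce / h_drop)

def energy_loss_percent(e):
    """Percentage of kinetic energy lost at impact: (1 - e^2) * 100%."""
    return (1 - e**2) * 100

e = cor_from_heights(1.0, 0.64)   # ball rebounds to 64% of its drop height
loss = energy_loss_percent(e)     # 36% of the kinetic energy is lost
```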
The COR of a ball can be affected by several things, mainly
the nature of the impacting surface (e.g. grass, concrete, wire mesh)
the material of the ball (e.g. leather, rubber, plastic)
the pressure inside the ball (if hollow)
the amount of rotation induced in the ball at impact
the impact velocity
External conditions such as temperature can change the properties of the impacting surface or of the ball, making them either more flexible or more rigid. This will, in turn, affect the COR. In general, the ball will deform more at higher impact velocities and will accordingly lose more of its energy, decreasing its COR.
=== Spin and angle of impact ===
Upon impacting the ground, some translational kinetic energy can be converted to rotational kinetic energy and vice versa depending on the ball's impact angle and angular velocity. If the ball moves horizontally at impact, friction will have a "translational" component in the direction opposite to the ball's motion. In the figure, the ball is moving to the right, and thus it will have a translational component of friction pushing the ball to the left. Additionally, if the ball is spinning at impact, friction will have a "rotational" component in the direction opposite to the ball's rotation. In the figure, the ball is spinning clockwise, and the point impacting the ground is moving to the left with respect to the ball's center of mass. The rotational component of friction is therefore pushing the ball to the right. Unlike the normal force and the force of gravity, these frictional forces will exert a torque on the ball, and change its angular velocity (ω).
Three situations can arise:
If a ball is propelled forward with backspin, the translational and rotational friction will act in the same directions. The ball's angular velocity will be reduced after impact, as will its horizontal velocity, and the ball is propelled upwards, possibly even exceeding its original height. It is also possible for the ball to start spinning in the opposite direction, and even bounce backwards.
If a ball is propelled forward with topspin, the translational and rotational friction will act in opposite directions. What exactly happens depends on which of the two components dominates.
If the ball is spinning much more rapidly than it was moving, rotational friction will dominate. The ball's angular velocity will be reduced after impact, but its horizontal velocity will be increased. The ball will be propelled forward but will not exceed its original height, and will keep spinning in the same direction.
If the ball is moving much more rapidly than it was spinning, translational friction will dominate. The ball's angular velocity will be increased after impact, but its horizontal velocity will be decreased. The ball will not exceed its original height and will keep spinning in the same direction.
If the surface is inclined by some amount θ, the entire diagram would be rotated by θ, but the force of gravity would remain pointing downwards (forming an angle θ with the surface). Gravity would then have a component parallel to the surface, which would contribute to friction, and thus contribute to rotation.
In racquet sports such as table tennis or racquetball, skilled players will use spin (including sidespin) to suddenly alter the ball's direction when it impacts a surface, such as the ground or their opponent's racquet. Similarly, in cricket, there are various methods of spin bowling that can make the ball deviate significantly off the pitch.
=== Non-spherical balls ===
The bounce of an oval-shaped ball (such as those used in gridiron football or rugby football) is in general much less predictable than the bounce of a spherical ball. Depending on the ball's alignment at impact, the normal force can act ahead or behind the centre of mass of the ball, and friction from the ground will depend on the alignment of the ball, as well as its rotation, spin, and impact velocity. Where the forces act with respect to the centre of mass of the ball changes as the ball rolls on the ground, and all forces can exert a torque on the ball, including the normal force and the force of gravity. This can cause the ball to bounce forward, bounce back, or sideways. Because it is possible to transfer some rotational kinetic energy into translational kinetic energy, it is even possible for the COR to be greater than 1, or for the forward velocity of the ball to increase upon impact.
=== Multiple stacked balls ===
A popular demonstration involves the bounce of multiple stacked balls. If a tennis ball is stacked on top of a basketball, and the two of them are dropped at the same time, the tennis ball will bounce much higher than it would have if dropped on its own, even exceeding its original release height. The result is surprising as it apparently violates conservation of energy. However, upon closer inspection, the basketball does not bounce as high as it would have if the tennis ball had not been on top of it, because it transferred some of its energy to the tennis ball, propelling it to a greater height.
The usual explanation involves considering two separate impacts: the basketball impacting with the floor, and then the basketball impacting with the tennis ball. Assuming perfectly elastic collisions, the basketball impacting the floor at 1 m/s would rebound at 1 m/s. The tennis ball going at 1 m/s would then have a relative impact velocity of 2 m/s, which means it would rebound at 2 m/s relative to the basketball, or 3 m/s relative to the floor, tripling its rebound velocity compared to impacting the floor on its own. This implies that the ball would bounce to 9 times its original height.
In reality, due to inelastic collisions, the tennis ball will increase its velocity and rebound height by a smaller factor, but still will bounce faster and higher than it would have on its own.
While the assumption of separate impacts is not actually valid (the balls remain in close contact with each other during most of the impact), this model will nonetheless reproduce experimental results with good agreement, and is often used to understand more complex phenomena such as the core collapse of supernovae, or gravitational slingshot manoeuvres.
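The two-stage elastic model can also be worked through with a finite mass ratio; the ball masses below are assumed typical values for a basketball and a tennis ball.

```python
def stacked_bounce_speed(v, m_big, m_small):
    """Two-stage elastic model of the stacked-ball drop.

    The bottom ball rebounds off the floor at +v while the top ball is
    still falling at -v; the two then collide elastically. Returns the
    top ball's upward speed, v * (3*m_big - m_small) / (m_big + m_small).
    """
    u_big, u_small = v, -v  # bottom ball moving up, top ball moving down
    return ((m_small - m_big) * u_small + 2 * m_big * u_big) / (m_big + m_small)

v_out = stacked_bounce_speed(1.0, m_big=0.6, m_small=0.058)
# About 2.65 m/s, approaching the 3 m/s limit of an infinitely heavy
# bottom ball; since height scales as v^2/2g, the top ball reaches
# roughly 7 times its drop height.
```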
== Sport regulations ==
Several sports governing bodies regulate the bounciness of a ball through various ways, some direct, some indirect.
AFL: Regulates the gauge pressure of the football to be between 62 kPa and 76 kPa.
FIBA: Regulates the gauge pressure so the basketball bounces between 1035 mm and 1085 mm (bottom of the ball) when it is dropped from a height of 1800 mm (bottom of the ball). This corresponds to a COR between 0.758 and 0.776.
FIFA: Regulates the gauge pressure of the soccer ball to be between 0.6 atm and 1.1 atm at sea level (61 to 111 kPa).
FIVB: Regulates the gauge pressure of the volleyball to be between 0.30 kgf/cm² and 0.325 kgf/cm² (29.4 to 31.9 kPa) for indoor volleyball, and 0.175 kgf/cm² to 0.225 kgf/cm² (17.2 to 22.1 kPa) for beach volleyball.
ITF: Regulates the height of the tennis ball bounce when dropped on a "smooth, rigid and horizontal block of high mass". Different types of ball are allowed for different types of surfaces. When dropped from a height of 100 inches (254 cm), the bounce must be 54–60 in (137–152 cm) for Type 1 balls, 53–58 in (135–147 cm) for Type 2 and Type 3 balls, and 48–53 in (122–135 cm) for High Altitude balls. This roughly corresponds to a COR of 0.735–0.775 (Type 1 ball), 0.728–0.762 (Type 2 & 3 balls), and 0.693–0.728 (High Altitude balls) when dropped on the testing surface.
ITTF: Regulates the playing surface so that the table tennis ball bounces approximately 23 cm when dropped from a height of 30 cm. This roughly corresponds to a COR of about 0.876 against the playing surface.
NBA: Regulates the gauge pressure of the basketball to be between 7.5 and 8.5 psi (51.7 to 58.6 kPa).
NFL: Regulates the gauge pressure of the American football to be between 12.5 and 13.5 psi (86 to 93 kPa).
R&A/USGA: Limits the COR of the golf ball directly, which should not exceed 0.83 against a golf club.
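Several of the COR figures quoted above can be recovered from the regulated drop and rebound heights using e = √(H_f/H_i):

```python
import math

def cor(h_drop, h_bounce):
    """COR inferred from a drop test: e = sqrt(h_bounce / h_drop)."""
    return math.sqrt(h_bounce / h_drop)

# FIBA: dropped from 1800 mm, the ball must rebound between 1035 and 1085 mm.
fiba_low, fiba_high = cor(1800, 1035), cor(1800, 1085)  # ~0.758 to ~0.776

# ITTF: a table tennis ball bounces about 23 cm from a 30 cm drop.
ittf = cor(30, 23)  # ~0.876
```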
The pressure of an American football was at the center of the deflategate controversy. Some sports do not regulate the bouncing properties of balls directly, but instead specify a construction method. In baseball, the introduction of a cork-based ball helped to end the dead-ball era and trigger the live-ball era.
== See also ==
Bouncy ball
List of ball games
Quantum bouncing ball
== Notes ==
== References ==
== Further reading ==
Briggs, L. J. (1945). "Methods for measuring the coefficient of restitution and the spin of a ball". Journal of Research of the National Bureau of Standards. 34 (1): 1–23. doi:10.6028/jres.034.001.
Cross, R. (2011). Physics of Baseball & Softball. Springer. ISBN 978-1-4419-8112-7.
Cross, R. (June 2014). "Physics of bounce". Sydney University.
Cross, R. (2015). "Behaviour of a bouncing ball". Physics Education. 50 (3): 335–341. Bibcode:2015PhyEd..50..335C. doi:10.1088/0031-9120/50/3/335. S2CID 122366736.
Stronge, W. J. (2004). Impact mechanics. Cambridge University Press. ISBN 978-0-521-60289-1.
Erlichson, Herman (1983). "Maximum projectile range with drag and lift, with particular application to golf". American Journal of Physics. 51 (4): 357–362. Bibcode:1983AmJPh..51..357E. doi:10.1119/1.13248.
A population model is a type of mathematical model that is applied to the study of population dynamics.
== Rationale ==
Models allow a better understanding of how complex interactions and processes work. Modeling of dynamic interactions in nature can provide a manageable way of understanding how numbers change over time or in relation to each other. Many patterns can be noticed by using population modeling as a tool.
Ecological population modeling is concerned with the changes in parameters such as population size and age distribution within a population. This might be due to interactions with the environment, individuals of their own species, or other species.
Population models are used to determine maximum harvest for agriculturists, to understand the dynamics of biological invasions, and for environmental conservation. Population models are also used to understand the spread of parasites, viruses, and disease.
Population models are also useful when a species becomes endangered: they can track the fragile population and help guide efforts to curb its decline.
== History ==
In the late 18th century, biologists began to develop techniques in population modeling in order to understand the dynamics of growing and shrinking populations of living organisms. Thomas Malthus was one of the first to note that populations grew with a geometric pattern while contemplating the fate of humankind. One of the most basic and influential models of population growth was the logistic model formulated by Pierre François Verhulst in 1838. The logistic model takes the shape of a sigmoid curve and describes the growth of a population as initially exponential, followed by a decrease in growth, bounded by a carrying capacity due to environmental pressures.
Population modeling became of particular interest to biologists in the 20th century as pressure on limited means of sustenance due to increasing human populations in parts of Europe was noticed by biologists like Raymond Pearl. In 1921 Pearl invited physicist Alfred J. Lotka to assist him in his lab. Lotka developed paired differential equations that showed the effect of a parasite on its prey. Mathematician Vito Volterra formulated the relationship between two species independently of Lotka. Together, Lotka and Volterra formed the Lotka–Volterra model for competition that applies the logistic equation to two species illustrating competition, predation, and parasitism interactions between species. In 1939 Patrick Leslie contributed to population modeling as he began work in biomathematics. Leslie emphasized the importance of constructing a life table in order to understand the effect that key life history strategies play in the dynamics of whole populations. Matrix algebra was used by Leslie in conjunction with life tables to extend the work of Lotka. Matrix models of populations calculate the growth of a population with life history variables. Later, Robert MacArthur and E. O. Wilson characterized island biogeography. The equilibrium model of island biogeography describes the number of species on an island as an equilibrium of immigration and extinction. The logistic population model, the Lotka–Volterra model of community ecology, life table matrix modeling, the equilibrium model of island biogeography, and variations thereof are the basis for ecological population modeling today.
== Equations ==
Logistic growth equation:
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)
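A minimal numerical sketch of the logistic equation (the parameter values are arbitrary illustrative choices):

```python
def logistic_growth(N0, r, K, dt=0.01, steps=2000):
    """Euler integration of the logistic equation dN/dt = r*N*(1 - N/K)."""
    N = N0
    history = [N]
    for _ in range(steps):
        N += r * N * (1 - N / K) * dt
        history.append(N)
    return history

traj = logistic_growth(N0=10, r=0.5, K=1000)
# The trajectory is sigmoid: near-exponential growth at first,
# then saturation just below the carrying capacity K.
```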
Competitive Lotka–Volterra equations:
\frac{dN_{1}}{dt} = r_{1}N_{1}\,\frac{K_{1} - N_{1} - \alpha N_{2}}{K_{1}}
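A numerical sketch of the two-species competitive model (the symmetric parameter values are illustrative; with weak competition both species persist below their carrying capacities):

```python
def lv_competition(N1, N2, r1, r2, K1, K2, a12, a21, dt=0.01, steps=5000):
    """Euler integration of the competitive Lotka-Volterra equations;
    a12 is the competitive effect of species 2 on species 1, and vice versa."""
    for _ in range(steps):
        dN1 = r1 * N1 * (K1 - N1 - a12 * N2) / K1
        dN2 = r2 * N2 * (K2 - N2 - a21 * N1) / K2
        N1 += dN1 * dt
        N2 += dN2 * dt
    return N1, N2

N1, N2 = lv_competition(10, 10, r1=1.0, r2=1.0, K1=100, K2=100, a12=0.5, a21=0.5)
# Both populations settle near K*(1 - a)/(1 - a^2) = 66.7, below K = 100.
```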
Island biogeography:
S = \frac{IP}{I + E}
Species–area relationship:
\log(S) = \log(c) + z\log(A)
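Exponentiating both sides gives S = cA^z; a small sketch with illustrative constants c and z:

```python
def species_count(A, c=10.0, z=0.25):
    """Species-area relationship S = c * A**z
    (equivalently log S = log c + z log A); c and z are illustrative."""
    return c * A**z

S1 = species_count(100.0)
S2 = species_count(1000.0)
# A tenfold increase in area multiplies species richness by 10**z (~1.78 here).
```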
== Examples of individual-based models ==
== See also ==
Population dynamics
Population dynamics of fisheries
Population ecology
Moment closure
== References ==
== External links ==
GreenBoxes code sharing network. GreenBoxes (Beta) is a repository for open-source population modeling code. GreenBoxes allows users to easily share their code and to search for others' shared code.
The Rabinovich–Fabrikant equations are a set of three coupled ordinary differential equations exhibiting chaotic behaviour for certain values of the parameters. They are named after Mikhail Rabinovich and Anatoly Fabrikant, who described them in 1979.
== System description ==
The equations are:
\dot{x} = y(z - 1 + x^{2}) + \gamma x
\dot{y} = x(3z + 1 - x^{2}) + \gamma y
\dot{z} = -2z(\alpha + xy),
where α, γ are constants that control the evolution of the system. For some values of α and γ, the system is chaotic, but for others it tends to a stable periodic orbit.
Danca and Chen note that the Rabinovich–Fabrikant system is difficult to analyse (due to the presence of quadratic and cubic terms) and that different attractors can be obtained for the same parameters by using different step sizes in the integration; the figure on the right shows an example of a solution obtained by two different solvers for the same parameter values and initial conditions. A hidden attractor has also recently been discovered in the Rabinovich–Fabrikant system.
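The system is easy to integrate with a fixed-step classical Runge–Kutta scheme, as in the pure-Python sketch below; the step size and number of steps are illustrative choices, and, as noted above, the attractor obtained can depend on them.

```python
def rf_derivs(state, alpha, gamma):
    """Right-hand side of the Rabinovich-Fabrikant equations."""
    x, y, z = state
    return (y * (z - 1 + x**2) + gamma * x,
            x * (3 * z + 1 - x**2) + gamma * y,
            -2 * z * (alpha + x * y))

def rk4_step(state, dt, alpha, gamma):
    """One classical fourth-order Runge-Kutta step."""
    f = lambda s: rf_derivs(s, alpha, gamma)
    k1 = f(state)
    k2 = f(tuple(s + dt / 2 * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + dt / 2 * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Chaotic parameter set discussed below, initial condition (-1, 0, 0.5).
state = (-1.0, 0.0, 0.5)
for _ in range(10000):
    state = rk4_step(state, 1e-3, alpha=1.1, gamma=0.87)
```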
=== Equilibrium points ===
The Rabinovich–Fabrikant system has five hyperbolic equilibrium points, one at the origin and four dependent on the system parameters α and γ:
\tilde{\mathbf{x}}_{0} = (0, 0, 0)
\tilde{\mathbf{x}}_{1,2} = \left(\pm q_{-},\ \mp\frac{\alpha}{q_{-}},\ 1 - \left(1 - \frac{\gamma}{\alpha}\right)q_{-}^{2}\right)
\tilde{\mathbf{x}}_{3,4} = \left(\pm q_{+},\ \mp\frac{\alpha}{q_{+}},\ 1 - \left(1 - \frac{\gamma}{\alpha}\right)q_{+}^{2}\right)
where
q_{\pm} = \sqrt{\frac{1 \pm \sqrt{1 - \gamma\alpha\left(1 - \frac{3\gamma}{4\alpha}\right)}}{2\left(1 - \frac{3\gamma}{4\alpha}\right)}}
These equilibrium points only exist for certain values of α and γ > 0.
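These formulas can be checked by substituting the points back into the right-hand side of the system; the parameter values below are an illustrative choice for which all four non-origin equilibria are real.

```python
import math

def rf_rhs(x, y, z, alpha, gamma):
    """Right-hand side of the Rabinovich-Fabrikant equations."""
    return (y * (z - 1 + x**2) + gamma * x,
            x * (3 * z + 1 - x**2) + gamma * y,
            -2 * z * (alpha + x * y))

def equilibria(alpha, gamma):
    """The four non-origin equilibrium points, from the formulas above."""
    b = 1 - 3 * gamma / (4 * alpha)
    disc = 1 - gamma * alpha * b
    points = []
    for sign in (1, -1):
        q2 = (1 + sign * math.sqrt(disc)) / (2 * b)
        q = math.sqrt(q2)
        z = 1 - (1 - gamma / alpha) * q2
        points += [(q, -alpha / q, z), (-q, alpha / q, z)]
    return points

pts = equilibria(alpha=1.1, gamma=0.87)
residuals = [max(abs(d) for d in rf_rhs(x, y, z, 1.1, 0.87)) for x, y, z in pts]
# All residuals are at machine-precision level, confirming the formulas.
```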
=== γ = 0.87, α = 1.1 ===
An example of chaotic behaviour is obtained for γ = 0.87 and α = 1.1 with initial conditions of (−1, 0, 0.5), see trajectory on the right. The correlation dimension was found to be 2.19 ± 0.01. The Lyapunov exponents, λ, are approximately 0.1981, 0, −0.6581, and the Kaplan–Yorke dimension is DKY ≈ 2.3010.
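The quoted Kaplan–Yorke dimension follows from the Lyapunov spectrum via D_KY = j + (λ1 + … + λj)/|λj+1|, where j is the largest index for which the partial sum of exponents is still non-negative:

```python
# Lyapunov exponents quoted above, in decreasing order.
lyapunov = [0.1981, 0.0, -0.6581]

partial = 0.0
j = 0
for lam in lyapunov:
    if partial + lam < 0:
        break
    partial += lam
    j += 1

D_KY = j + partial / abs(lyapunov[j])  # 2 + 0.1981/0.6581, about 2.3010
```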
=== γ = 0.1 ===
Danca and Romera showed that for γ = 0.1, the system is chaotic for α = 0.98, but progresses on a stable limit cycle for α = 0.14.
== See also ==
List of chaotic maps
== References ==
== External links ==
Weisstein, Eric W. "Rabinovich–Fabrikant Equation." From MathWorld—A Wolfram Web Resource.
Chaotics Models – a more appropriate approach to the chaotic graph of the system "Rabinovich–Fabrikant Equation"
Chaos: Making a New Science is a debut non-fiction book by James Gleick that first introduced the principles and early development of chaos theory to the public. It was a finalist for the National Book Award and the Pulitzer Prize in 1987, and was shortlisted for the Science Book Prize in 1989. The book was published on October 29, 1987 by Viking Books.
== Overview ==
Chaos: Making a New Science was the first popular book about chaos theory. It describes the Mandelbrot set, Julia sets, and Lorenz attractors without using complicated mathematics. It portrays the efforts of dozens of scientists whose separate work contributed to the developing field. The text remains in print and is widely used as an introduction to the topic for the mathematical layperson. The book approaches the history of chaos theory chronologically, starting with Edward Norton Lorenz and the butterfly effect, through Mitchell Feigenbaum, and ending with more modern applications.
The book covers chaos theory under the lens of four themes: sensitive dependence on initial conditions, self-similarity, universality, and nonlinearity.
An enhanced ebook edition was released by Open Road Media in 2011, adding embedded video and hyperlinked notes.
== Reception ==
Robert Sapolsky said, "Chaos is the first book since Baby Beluga where I've gotten to the last page and immediately started reading it over again from the front: I've found this to be the most influential book in my thinking about science since college."
Freeman Dyson praised the book for its popular account but criticized its omission of the earlier work of Dame Mary L. Cartwright and J. E. Littlewood, which helped form the foundation of chaos theory.
== References ==
== Further reading ==
Devaney, Robert L. (November 1989). "Review of Chaos: Making a New Science". The College Mathematics Journal. 20 (5): 458–459. doi:10.2307/2686940. ISSN 0746-8342. JSTOR 2686940.
Gans, Joshua (December 1989). "Chaos: Making a New Science by James Gleick (Cardinal, London, 1989) pp. 352, $14.99, ISBN 0-7474-0413-5". Prometheus. 7 (2): 412–415. doi:10.1080/08109028908629102. ISSN 0810-9028.
Lewis, Michael (1989). "Review of Chaos: Making a New Science". Human Development. 32 (3/4): 241–244. ISSN 0018-716X. JSTOR 26767401.
Loevinger, Lee (Summer 1988). "Review of Chaos: Making a New Science". Jurimetrics. 28 (4): 505–509. ISSN 0897-1277. JSTOR 29762101.
Friedrich, Paul (Winter 1988). "Eerie Chaos and Eerier Order". Journal of Anthropological Research. 44 (4): 435–444. doi:10.1086/jar.44.4.3630508. ISSN 0091-7710. JSTOR 3630508. S2CID 171222927.
Balazs, Nandor (March 1989). "Review of Chaos: Making a New Science". The Quarterly Review of Biology. 64 (1): 112–113. doi:10.1086/416224. ISSN 0033-5770. JSTOR 2831779.
Rucker, Rudy (1 November 1987). "Patterns of Disorder". The Washington Post. Retrieved 21 December 2020.
Shlesinger, Michael F. (March 1988). "Book review: Chaos: Making a new science". Journal of Statistical Physics. 50 (5–6): 1285–1286. Bibcode:1988JSP....50.1285S. doi:10.1007/BF01019170. ISSN 0022-4715. S2CID 122110686.
Kendig, Frank (1987-10-15). "Books: Third Scientific Revolution of the Century (Published 1987)". The New York Times. ISSN 0362-4331. Retrieved 2020-12-22.
Glazier, James; Gunaratne, Gemunu (February 1988). "Chaos: Making a New Science". Physics Today. 41 (2): 79. Bibcode:1988PhT....41b..79G. doi:10.1063/1.2811320. ISSN 0031-9228.
Pepinsky, Hal (Spring 1990). "Reproducing Violence: A Review Essay". Social Justice. 17 (1 (39)): 155–172. ISSN 1043-1578. JSTOR 29766530.
Hilborn, Robert C. (November 1988). "Chaos, Making a New Science". American Journal of Physics. 56 (11): 1053–1054. Bibcode:1988AmJPh..56.1053G. doi:10.1119/1.15345. ISSN 0002-9505.
Dritschel, D. G. (July 1990). "Chaos: Making a New Science. By J. Gleick . Viking, 1987. 352 pp. $19.95 (hardback); Cardinal, 1988. £5.99 (paperback)". Journal of Fluid Mechanics. 216: 657–658. Bibcode:1990JFM...216..657D. doi:10.1017/S002211209021057X. ISSN 0022-1120. S2CID 121316639.
Radzicki, Michael J. (Winter 1989). "Chaos: Making a new science James Gleick New York: Viking, 1987". System Dynamics Review. 5 (1): 90–91. doi:10.1002/sdr.4260050111.
Balachandran, Balakumar; Hogan, John (June 1999). "Featured Review: So You Have Been Asked to Give a Lecture Course on the Applications of Nonlinear Dynamics..." SIAM Review. 41 (2): 375–382. ISSN 0036-1445. JSTOR 2653080.
"Chaos: Making a new science". Long Range Planning. 22 (5): 152. October 1989. doi:10.1016/0024-6301(89)90186-6.
"Book Reviews". Journal of the American Psychoanalytic Association. 40 (3): 845–946. June 1992. doi:10.1177/000306519204000308. ISSN 0003-0651. S2CID 221013603.
Meisel, Martin (Spring 1988). "Review of Chaos: Making a New Science". The Wilson Quarterly. 12 (2): 138–140. ISSN 0363-3276. JSTOR 40257307.
Bolch, Ben W. (January 1989). "Review of Chaos: Making a New Science". Southern Economic Journal. 55 (3): 779–780. doi:10.2307/1059589. ISSN 0038-4038. JSTOR 1059589.
Mahncke, Frank C. (Summer 1988). "Review of Chaos: Making a New Science". Naval War College Review. 41 (3): 118–119. ISSN 0028-1484. JSTOR 44640030.
Artigiani, Robert (Winter 1990). "Review of Chaos: Making A New Science". Naval War College Review. 43 (1): 133–136. ISSN 0028-1484. JSTOR 44638368.
"Chaos: The Making of a New Science". Publishers Weekly. 20 October 1987. Retrieved 2020-12-22.
"Chaos: Making a New Science". Publishers Weekly. December 1988. Retrieved 2020-12-22.
Burns, David (22 November 1987). "Computer 'Chaos'". Chicago Tribune. Retrieved 2020-12-22.
== External links ==
Excerpts
Selection from the prologue Archived 2007-02-02 at the Wayback Machine
Website of James Gleick
The Duffing equation (or Duffing oscillator), named after Georg Duffing (1861–1944), is a non-linear second-order differential equation used to model certain damped and driven oscillators. The equation is given by
{\displaystyle {\ddot {x}}+\delta {\dot {x}}+\alpha x+\beta x^{3}=\gamma \cos(\omega t),}
where the (unknown) function {\displaystyle x=x(t)} is the displacement at time t, {\displaystyle {\dot {x}}} is the first derivative of {\displaystyle x} with respect to time, i.e. velocity, and {\displaystyle {\ddot {x}}} is the second time-derivative of {\displaystyle x,} i.e. acceleration. The numbers {\displaystyle \delta ,} {\displaystyle \alpha ,} {\displaystyle \beta ,} {\displaystyle \gamma } and {\displaystyle \omega } are given constants.
The equation describes the motion of a damped oscillator with a more complex potential than in simple harmonic motion (which corresponds to the case {\displaystyle \beta =\delta =0}); in physical terms, it models, for example, an elastic pendulum whose spring's stiffness does not exactly obey Hooke's law.
The Duffing equation is an example of a dynamical system that exhibits chaotic behavior. Moreover, the Duffing system presents in the frequency response the jump resonance phenomenon that is a sort of frequency hysteresis behaviour.
== Parameters ==
The parameters in the above equation are:
{\displaystyle \delta } controls the amount of damping,
{\displaystyle \alpha } controls the linear stiffness,
{\displaystyle \beta } controls the amount of non-linearity in the restoring force; if {\displaystyle \beta =0,} the Duffing equation describes a damped and driven simple harmonic oscillator,
{\displaystyle \gamma } is the amplitude of the periodic driving force; if {\displaystyle \gamma =0} the system is without a driving force, and
{\displaystyle \omega } is the angular frequency of the periodic driving force.
The Duffing equation can be seen as describing the oscillations of a mass attached to a nonlinear spring and a linear damper. The restoring force provided by the nonlinear spring is then {\displaystyle \alpha x+\beta x^{3}.}
When {\displaystyle \alpha >0} and {\displaystyle \beta >0} the spring is called a hardening spring. Conversely, for {\displaystyle \beta <0} it is a softening spring (still with {\displaystyle \alpha >0}). Consequently, the adjectives hardening and softening are used with respect to the Duffing equation in general, dependent on the values of {\displaystyle \beta } (and {\displaystyle \alpha }).
The number of parameters in the Duffing equation can be reduced by two through scaling (in accord with the Buckingham π theorem), e.g. the excursion {\displaystyle x} and time {\displaystyle t} can be scaled as: {\displaystyle \tau =t{\sqrt {\alpha }}} and {\displaystyle y=x\alpha /\gamma ,} assuming {\displaystyle \alpha } is positive (other scalings are possible for different ranges of the parameters, or for different emphasis in the problem studied). Then:
{\displaystyle {\ddot {y}}+2\eta \,{\dot {y}}+y+\varepsilon \,y^{3}=\cos(\sigma \tau ),}
where {\displaystyle \eta ={\frac {\delta }{2{\sqrt {\alpha }}}},} {\displaystyle \varepsilon ={\frac {\beta \gamma ^{2}}{\alpha ^{3}}},} and {\displaystyle \sigma ={\frac {\omega }{\sqrt {\alpha }}}.}
The dots denote differentiation of {\displaystyle y(\tau )} with respect to {\displaystyle \tau .} This shows that the solutions to the forced and damped Duffing equation can be described in terms of the three parameters ({\displaystyle \varepsilon }, {\displaystyle \eta }, and {\displaystyle \sigma }) and two initial conditions (i.e. for {\displaystyle y(t_{0})} and {\displaystyle {\dot {y}}(t_{0})}).
== Methods of solution ==
In general, the Duffing equation does not admit an exact symbolic solution. However, many approximate methods work well:
Expansion in a Fourier series may provide an equation of motion to arbitrary precision.
The {\displaystyle x^{3}} term, also called the Duffing term, can be approximated as small and the system treated as a perturbed simple harmonic oscillator.
The Frobenius method yields a complex but workable solution.
Any of the various numeric methods such as Euler's method and Runge–Kutta methods can be used.
The homotopy analysis method (HAM) has also been reported for obtaining approximate solutions of the Duffing equation, also for strong nonlinearity.
In the special case of the undamped ({\displaystyle \delta =0}) and undriven ({\displaystyle \gamma =0}) Duffing equation, an exact solution can be obtained using Jacobi's elliptic functions.
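The numeric route mentioned above can be sketched concretely. Below is a minimal pure-Python classical fourth-order Runge–Kutta integrator for the forced, damped Duffing equation; the function name and parameter layout are illustrative, not from any particular library.

```python
import math

def duffing_rk4(alpha, beta, delta, gamma, omega, x0, v0, dt, steps):
    """Integrate x'' + delta x' + alpha x + beta x^3 = gamma cos(omega t)
    with the classical fourth-order Runge-Kutta method."""
    def f(t, x, v):
        # first-order system: x' = v, v' = forcing - damping - restoring force
        return v, gamma * math.cos(omega * t) - delta * v - alpha * x - beta * x ** 3

    t, x, v = 0.0, x0, v0
    trajectory = [(t, x, v)]
    for _ in range(steps):
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = f(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = f(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        trajectory.append((t, x, v))
    return trajectory
```

Setting {\displaystyle \beta =\delta =\gamma =0} reduces the system to a simple harmonic oscillator, which gives a convenient sanity check on the integrator.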
== Boundedness of the solution for the unforced oscillator ==
=== Undamped oscillator ===
Multiplication of the undamped and unforced Duffing equation, {\displaystyle \gamma =\delta =0,} with {\displaystyle {\dot {x}}} gives:
{\displaystyle {\begin{aligned}&{\dot {x}}\left({\ddot {x}}+\alpha x+\beta x^{3}\right)=0\\[1ex]\Longrightarrow {}&{\frac {\mathrm {d} }{\mathrm {d} t}}\left[{\frac {1}{2}}\left({\dot {x}}\right)^{2}+{\frac {1}{2}}\alpha x^{2}+{\frac {1}{4}}\beta x^{4}\right]=0\\[1ex]\Longrightarrow {}&{\frac {1}{2}}\left({\dot {x}}\right)^{2}+{\frac {1}{2}}\alpha x^{2}+{\frac {1}{4}}\beta x^{4}=H,\end{aligned}}}
with H a constant. The value of H is determined by the initial conditions {\displaystyle x(0)} and {\displaystyle {\dot {x}}(0).}
The substitution {\displaystyle y={\dot {x}}} in H shows that the system is Hamiltonian:
{\displaystyle {\begin{aligned}&{\dot {x}}=+{\frac {\partial H}{\partial y}},\qquad {\dot {y}}=-{\frac {\partial H}{\partial x}}\\[1ex]\Longrightarrow {}&H={\tfrac {1}{2}}y^{2}+{\tfrac {1}{2}}\alpha x^{2}+{\tfrac {1}{4}}\beta x^{4}.\end{aligned}}}
When both {\displaystyle \alpha } and {\displaystyle \beta } are positive, the solution is bounded:
{\displaystyle |x|\leq {\sqrt {2H/\alpha }}\qquad {\text{ and }}\qquad |{\dot {x}}|\leq {\sqrt {2H}},}
with the Hamiltonian H being positive. This bound on {\displaystyle x} comes from dropping the term with {\displaystyle \beta }. Including it gives a smaller but more complicated bound, obtained by solving {\displaystyle (\beta /4)x^{4}+(\alpha /2)x^{2}-H=0}, a quadratic equation for {\displaystyle x^{2}}.
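The conservation of H in the undamped, unforced case can be checked numerically. The sketch below (function names are illustrative) integrates the oscillator with a standard Runge–Kutta step and verifies that the Hamiltonian stays essentially constant and that the bound on |x| holds.

```python
def hamiltonian(x, v, alpha, beta):
    # H = (1/2) v^2 + (1/2) alpha x^2 + (1/4) beta x^4
    return 0.5 * v ** 2 + 0.5 * alpha * x ** 2 + 0.25 * beta * x ** 4

def integrate_unforced(alpha, beta, x0, v0, dt, steps):
    """RK4 for the unforced, undamped Duffing equation x'' = -alpha x - beta x^3."""
    def acc(x):
        return -alpha * x - beta * x ** 3
    x, v = x0, v0
    for _ in range(steps):
        k1x, k1v = v, acc(x)
        k2x, k2v = v + dt / 2 * k1v, acc(x + dt / 2 * k1x)
        k3x, k3v = v + dt / 2 * k2v, acc(x + dt / 2 * k2x)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x, v
```

RK4 is not symplectic, so H drifts slightly over very long runs, but over moderate times the drift is far below any physically relevant scale.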
=== Damped oscillator ===
Similarly, the damped unforced oscillator converges globally to an equilibrium, as can be shown with the Lyapunov function method:
{\displaystyle {\begin{aligned}&{\dot {x}}\left({\ddot {x}}+\delta {\dot {x}}+\alpha x+\beta x^{3}\right)=0\\[1ex]\Longrightarrow {}&{\frac {\mathrm {d} }{\mathrm {d} t}}\left[{\frac {1}{2}}\left({\dot {x}}\right)^{2}+{\frac {1}{2}}\alpha x^{2}+{\frac {1}{4}}\beta x^{4}\right]=-\delta \,\left({\dot {x}}\right)^{2}\\[1ex]\Longrightarrow {}&{\frac {\mathrm {d} H}{\mathrm {d} t}}=-\delta \,\left({\dot {x}}\right)^{2}\leq 0,\end{aligned}}}
since {\displaystyle \delta \geq 0} for damping. Without forcing the damped Duffing oscillator will end up at (one of) its stable equilibrium point(s). The equilibrium points, stable and unstable, are at {\displaystyle \alpha x+\beta x^{3}=0.} If {\displaystyle \alpha >0} the stable equilibrium is at {\displaystyle x=0.} If {\displaystyle \alpha <0} and {\displaystyle \beta >0} the stable equilibria are at {\textstyle x=+{\sqrt {-\alpha /\beta }}} and {\textstyle x=-{\sqrt {-\alpha /\beta }}.}
== Frequency response ==
The forced Duffing oscillator with cubic nonlinearity is described by the following ordinary differential equation:
{\displaystyle {\ddot {x}}+\delta {\dot {x}}+\alpha x+\beta x^{3}=\gamma \cos(\omega t).}
The frequency response of this oscillator describes the amplitude {\displaystyle z} of the steady state response of the equation (i.e. {\displaystyle x(t)}) at a given frequency of excitation {\displaystyle \omega .} For a linear oscillator with {\displaystyle \beta =0,} the frequency response is also linear. However, for a nonzero cubic coefficient {\displaystyle \beta }, the frequency response becomes nonlinear. Depending on the type of nonlinearity, the Duffing oscillator can show hardening, softening or mixed hardening–softening frequency response. In any case, using the homotopy analysis method or harmonic balance, one can derive a frequency response equation in the following form:
{\displaystyle \left[\left(\omega ^{2}-\alpha -{\tfrac {3}{4}}\beta z^{2}\right)^{2}+\left(\delta \omega \right)^{2}\right]\,z^{2}=\gamma ^{2}.}
For the parameters of the Duffing equation, the above algebraic equation gives the steady state oscillation amplitude {\displaystyle z} at a given excitation frequency.
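Since the frequency response equation is a cubic in {\displaystyle z^{2}}, its roots can be found numerically. A minimal sketch using grid scanning plus bisection on the residual (function name, grid size, and search range are illustrative assumptions):

```python
def response_amplitudes(alpha, beta, delta, gamma, omega, u_max=10.0, n=100000):
    """Solve [(w^2 - alpha - (3/4) beta z^2)^2 + (delta w)^2] z^2 = gamma^2
    for z >= 0 by bracketing sign changes of the residual in u = z^2."""
    def g(u):
        return ((omega ** 2 - alpha - 0.75 * beta * u) ** 2
                + (delta * omega) ** 2) * u - gamma ** 2

    roots = []
    us = [u_max * i / n for i in range(n + 1)]
    for a, b in zip(us, us[1:]):
        if g(a) == 0.0:
            roots.append(a)
        elif g(a) * g(b) < 0.0:
            lo, hi = a, b
            for _ in range(60):          # bisection to machine precision
                mid = (lo + hi) / 2
                if g(lo) * g(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append((lo + hi) / 2)
    return [u ** 0.5 for u in roots]     # amplitudes z
```

In the linear case {\displaystyle \beta =0} the equation has the single solution z² = γ²/((ω² − α)² + (δω)²), which makes a useful check; for suitable nonzero β the routine returns one or three amplitudes, matching the multivalued response discussed below.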
=== Graphically solving for frequency response ===
We may graphically solve for {\displaystyle z^{2}} as the intersection of two curves in the {\displaystyle (z^{2},y)} plane:
{\displaystyle {\begin{cases}y=\left(\omega ^{2}-\alpha -{\frac {3}{4}}\beta z^{2}\right)^{2}+\left(\delta \omega \right)^{2}\\[1ex]y={\dfrac {\gamma ^{2}}{z^{2}}}\end{cases}}}
For fixed {\displaystyle \alpha ,\delta ,\gamma }, the second curve is a fixed hyperbola in the first quadrant. The first curve is a parabola with shape {\textstyle y={\tfrac {9}{16}}\beta ^{2}(z^{2})^{2}}, and apex at location {\textstyle ({\tfrac {4}{3\beta }}(\omega ^{2}-\alpha ),\delta ^{2}\omega ^{2})}. If we fix {\displaystyle \beta } and vary {\displaystyle \omega }, then the apex of the parabola moves along the line {\textstyle y={\tfrac {3}{4}}\beta \delta ^{2}(z^{2})+\delta ^{2}\alpha }.
Graphically, then, we see that if {\displaystyle \beta } is a large positive number, then as {\displaystyle \omega } varies, the parabola intersects the hyperbola at one point, then three points, then one point again. Similarly we can analyze the case when {\displaystyle \beta } is a large negative number.
=== Jumps ===
For certain ranges of the parameters in the Duffing equation, the frequency response may no longer be a single-valued function of forcing frequency {\displaystyle \omega .} For a hardening spring oscillator ({\displaystyle \alpha >0} and large enough positive {\displaystyle \beta >\beta _{c+}>0}) the frequency response overhangs to the high-frequency side, and to the low-frequency side for the softening spring oscillator ({\displaystyle \alpha >0} and {\displaystyle \beta <\beta _{c-}<0}). The lower overhanging side is unstable – i.e. the dashed-line parts in the figures of the frequency response – and cannot be realized for a sustained time. Consequently, the jump phenomenon shows up:
when the angular frequency {\displaystyle \omega } is slowly increased (with other parameters fixed), the response amplitude {\displaystyle z} drops at A suddenly to B,
if the frequency {\displaystyle \omega } is slowly decreased, then at C the amplitude jumps up to D, thereafter following the upper branch of the frequency response.
The jumps A–B and C–D do not coincide, so the system shows hysteresis depending on the frequency sweep direction.
=== Transition to chaos ===
The above analysis assumed that the base frequency response dominates (necessary for performing harmonic balance), and higher frequency responses are negligible. This assumption fails to hold when the forcing is sufficiently strong. Higher order harmonics cannot be neglected, and the dynamics become chaotic. There are different possible transitions to chaos, most commonly by successive period doubling.
== Examples ==
Some typical examples of the time series and phase portraits of the Duffing equation, showing the appearance of subharmonics through period-doubling bifurcation – as well as chaotic behavior – are shown in the figures below. The forcing amplitude increases from {\displaystyle \gamma =0.20} to {\displaystyle \gamma =0.65}. The other parameters have the values: {\displaystyle \alpha =-1}, {\displaystyle \beta =+1}, {\displaystyle \delta =0.3} and {\displaystyle \omega =1.2}. The initial conditions are {\displaystyle x(0)=1} and {\displaystyle {\dot {x}}(0)=0.} The red dots in the phase portraits are at times {\displaystyle t} which are an integer multiple of the period {\displaystyle T=2\pi /\omega }.
== References ==
=== Citations ===
=== Bibliography ===
== External links ==
Duffing oscillator on Scholarpedia
MathWorld page
Pchelintsev, A. N.; Ahmad, S. (2020). "Solution of the Duffing equation by the power series method" (PDF). Transactions of TSTU. 26 (1): 118–123.
A number of different Markov models of DNA sequence evolution have been proposed. These substitution models differ in terms of the parameters used to describe the rates at which one nucleotide replaces another during evolution. These models are frequently used in molecular phylogenetic analyses. In particular, they are used during the calculation of likelihood of a tree (in Bayesian and maximum likelihood approaches to tree estimation) and they are used to estimate the evolutionary distance between sequences from the observed differences between the sequences.
== Introduction ==
These models are phenomenological descriptions of the evolution of DNA as a string of four discrete states. These Markov models do not explicitly depict the mechanism of mutation nor the action of natural selection. Rather they describe the relative rates of different changes. For example, mutational biases and purifying selection favoring conservative changes are probably both responsible for the relatively high rate of transitions compared to transversions in evolving sequences. However, the Kimura (K80) model described below only attempts to capture the effect of both forces in a parameter that reflects the relative rate of transitions to transversions.
Evolutionary analyses of sequences are conducted on a wide variety of time scales. Thus, it is convenient to express these models in terms of the instantaneous rates of change between different states (the Q matrices below). If we are given a starting (ancestral) state at one position, the model's Q matrix and a branch length expressing the expected number of changes to have occurred since the ancestor, then we can derive the probability of the descendant sequence having each of the four states. The mathematical details of this transformation from rate-matrix to probability matrix are described in the mathematics of substitution models section of the substitution model page. By expressing models in terms of the instantaneous rates of change we can avoid estimating a large number of parameters for each branch on a phylogenetic tree (or each comparison if the analysis involves many pairwise sequence comparisons).
The models described on this page describe the evolution of a single site within a set of sequences. They are often used for analyzing the evolution of an entire locus by making the simplifying assumption that different sites evolve independently and are identically distributed. This assumption may be justifiable if the sites can be assumed to be evolving neutrally. If the primary effect of natural selection on the evolution of the sequences is to constrain some sites, then models of among-site rate-heterogeneity can be used. This approach allows one to estimate only one matrix of relative rates of substitution, and another set of parameters describing the variance in the total rate of substitution across sites.
== DNA evolution as a continuous-time Markov chain ==
=== Continuous-time Markov chains ===
Continuous-time Markov chains have the usual transition matrices which are, in addition, parameterized by time, {\displaystyle t}. Specifically, if {\displaystyle E_{1},E_{2},E_{3},E_{4}} are the states, then the transition matrix is {\displaystyle P(t)={\big (}P_{ij}(t){\big )}} where each individual entry, {\displaystyle P_{ij}(t)}, refers to the probability that state {\displaystyle E_{i}} will change to state {\displaystyle E_{j}} in time {\displaystyle t}.
Example: We would like to model the substitution process in DNA sequences (i.e. Jukes–Cantor, Kimura, etc.) in a continuous-time fashion. The corresponding transition matrices will look like:
{\displaystyle P(t)={\begin{pmatrix}p_{\mathrm {AA} }(t)&p_{\mathrm {AG} }(t)&p_{\mathrm {AC} }(t)&p_{\mathrm {AT} }(t)\\p_{\mathrm {GA} }(t)&p_{\mathrm {GG} }(t)&p_{\mathrm {GC} }(t)&p_{\mathrm {GT} }(t)\\p_{\mathrm {CA} }(t)&p_{\mathrm {CG} }(t)&p_{\mathrm {CC} }(t)&p_{\mathrm {CT} }(t)\\p_{\mathrm {TA} }(t)&p_{\mathrm {TG} }(t)&p_{\mathrm {TC} }(t)&p_{\mathrm {TT} }(t)\end{pmatrix}}}
where the top-left and bottom-right 2 × 2 blocks correspond to transition probabilities and the top-right and bottom-left 2 × 2 blocks corresponds to transversion probabilities.
Assumption: If at some time {\displaystyle t_{0}}, the Markov chain is in state {\displaystyle E_{i}}, then the probability that at time {\displaystyle t_{0}+t}, it will be in state {\displaystyle E_{j}} depends only upon {\displaystyle i}, {\displaystyle j} and {\displaystyle t}. This then allows us to write that probability as {\displaystyle p_{ij}(t)}.
Theorem: Continuous-time transition matrices satisfy:
{\displaystyle P(t+\tau )=P(t)P(\tau )}
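This semigroup property can be verified for a concrete model. The sketch below uses the closed-form Jukes–Cantor transition probabilities (derived later in this article) and checks that multiplying P(t) by P(τ) reproduces P(t + τ); the helper names are illustrative.

```python
import math

def jc_transition_matrix(t, mu=1.0):
    """Jukes-Cantor P(t): diagonal 1/4 + (3/4) e^{-t mu},
    off-diagonal 1/4 - (1/4) e^{-t mu}."""
    same = 0.25 + 0.75 * math.exp(-t * mu)
    diff = 0.25 - 0.25 * math.exp(-t * mu)
    return [[same if i == j else diff for j in range(4)] for i in range(4)]

def matmul(A, B):
    # plain 4x4 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```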
Note: There is here a possible confusion between two meanings of the word transition. (i) In the context of Markov chains, transition is the general term for the change between two states. (ii) In the context of nucleotide changes in DNA sequences, transition is a specific term for the exchange between either the two purines (A ↔ G) or the two pyrimidines (C ↔ T) (for additional details, see the article about transitions in genetics). By contrast, an exchange between one purine and one pyrimidine is called a transversion.
=== Deriving the dynamics of substitution ===
Consider a DNA sequence of fixed length m evolving in time by base replacement. Assume that the processes followed by the m sites are Markovian independent, identically distributed and that the process is constant over time. For a particular site, let {\displaystyle {\mathcal {E}}=\{A,\,G,\,C,\,T\}} be the set of possible states for the site, and {\displaystyle \mathbf {p} (t)=(p_{A}(t),\,p_{G}(t),\,p_{C}(t),\,p_{T}(t))} their respective probabilities at time {\displaystyle t}. For two distinct {\displaystyle x,y\in {\mathcal {E}}}, let {\displaystyle \mu _{xy}\ } be the transition rate from state {\displaystyle x} to state {\displaystyle y}. Similarly, for any {\displaystyle x}, let the total rate of change from {\displaystyle x} be
{\displaystyle \mu _{x}=\sum _{y\neq x}\mu _{xy}\,.}
The changes in the probability distribution {\displaystyle p_{A}(t)} for small increments of time {\displaystyle \Delta t} are given by
{\displaystyle p_{A}(t+\Delta t)=p_{A}(t)-p_{A}(t)\mu _{A}\Delta t+\sum _{x\neq A}p_{x}(t)\mu _{xA}\Delta t\,.}
In other words, (in frequentist language), the frequency of {\displaystyle A}'s at time {\displaystyle t+\Delta t} is equal to the frequency at time {\displaystyle t} minus the frequency of the lost {\displaystyle A}'s plus the frequency of the newly created {\displaystyle A}'s.
Similarly for the probabilities {\displaystyle p_{G}(t)}, {\displaystyle p_{C}(t)} and {\displaystyle p_{T}(t)}. These equations can be written compactly as
{\displaystyle \mathbf {p} (t+\Delta t)=\mathbf {p} (t)+\mathbf {p} (t)Q\Delta t\,,}
where
{\displaystyle Q={\begin{pmatrix}-\mu _{A}&\mu _{AG}&\mu _{AC}&\mu _{AT}\\\mu _{GA}&-\mu _{G}&\mu _{GC}&\mu _{GT}\\\mu _{CA}&\mu _{CG}&-\mu _{C}&\mu _{CT}\\\mu _{TA}&\mu _{TG}&\mu _{TC}&-\mu _{T}\end{pmatrix}}}
is known as the rate matrix. Note that, by definition, the sum of the entries in each row of {\displaystyle Q} is equal to zero. It follows that
{\displaystyle \mathbf {p} '(t)=\mathbf {p} (t)Q\,.}
For a stationary process, where {\displaystyle Q} does not depend on time t, this differential equation can be solved. First,
{\displaystyle P(t)=\exp(tQ),}
where {\displaystyle \exp(tQ)} denotes the exponential of the matrix {\displaystyle tQ}. As a result,
{\displaystyle \mathbf {p} (t)=\mathbf {p} (0)P(t)=\mathbf {p} (0)\exp(tQ)\,.}
=== Ergodicity ===
If the Markov chain is irreducible, i.e. if it is always possible to go from a state {\displaystyle x} to a state {\displaystyle y} (possibly in several steps), then it is also ergodic. As a result, it has a unique stationary distribution {\displaystyle {\boldsymbol {\pi }}=\{\pi _{x},\,x\in {\mathcal {E}}\}}, where {\displaystyle \pi _{x}} corresponds to the proportion of time spent in state {\displaystyle x} after the Markov chain has run for an infinite amount of time. In DNA evolution, under the assumption of a common process for each site, the stationary frequencies {\displaystyle \pi _{A},\,\pi _{G},\,\pi _{C},\,\pi _{T}} correspond to equilibrium base compositions. Indeed, note that since the stationary distribution {\displaystyle {\boldsymbol {\pi }}} satisfies {\displaystyle {\boldsymbol {\pi }}Q=0}, we see that when the current distribution {\displaystyle \mathbf {p} (t)} is the stationary distribution {\displaystyle {\boldsymbol {\pi }}} we have
{\displaystyle {\mathbf {p} '(t)=\mathbf {p} (t)Q={\boldsymbol {\pi }}}Q=0\,.}
In other words, the frequencies of {\displaystyle p_{A}(t),\,p_{G}(t),\,p_{C}(t),\,p_{T}(t)} do not change.
=== Time reversibility ===
Definition: A stationary Markov process is time reversible if (in the steady state) the amount of change from state {\displaystyle x\ } to {\displaystyle y\ } is equal to the amount of change from {\displaystyle y\ } to {\displaystyle x\ }, (although the two states may occur with different frequencies). This means that:
{\displaystyle \pi _{x}\mu _{xy}=\pi _{y}\mu _{yx}\ }
Not all stationary processes are reversible; however, most commonly used DNA evolution models assume time reversibility, which is considered to be a reasonable assumption.
Under the time reversibility assumption, let {\displaystyle s_{xy}=\mu _{xy}/\pi _{y}\ }; then it is easy to see that:
{\displaystyle s_{xy}=s_{yx}\ }
Definition The symmetric term {\displaystyle s_{xy}\ } is called the exchangeability between states {\displaystyle x\ } and {\displaystyle y\ }. In other words, {\displaystyle s_{xy}\ } is the fraction of the frequency of state {\displaystyle x\ } that is the result of transitions from state {\displaystyle y\ } to state {\displaystyle x\ }.
Corollary The 12 off-diagonal entries of the rate matrix, {\displaystyle Q\ } (note the off-diagonal entries determine the diagonal entries, since the rows of {\displaystyle Q\ } sum to zero) can be completely determined by 9 numbers; these are: 6 exchangeability terms and 3 stationary frequencies {\displaystyle \pi _{x}\ }, (since the stationary frequencies sum to 1).
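The corollary suggests a direct construction: given the stationary frequencies and a symmetric matrix of exchangeabilities, set μ_xy = s_xy π_y and fill the diagonal so that each row sums to zero. A hypothetical sketch (not from any library):

```python
def build_reversible_Q(pi, s):
    """Build a time-reversible rate matrix from stationary frequencies pi
    and symmetric exchangeabilities s (s[i][j] == s[j][i]):
    mu_xy = s_xy * pi_y, with the diagonal chosen so rows sum to zero."""
    n = len(pi)
    Q = [[s[i][j] * pi[j] if i != j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        Q[i][i] = -sum(Q[i][j] for j in range(n) if j != i)
    return Q
```

By construction the resulting matrix satisfies detailed balance, π_x μ_xy = π_y μ_yx, since π_x s_xy π_y is symmetric in x and y.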
=== Scaling of branch lengths ===
By comparing extant sequences, one can determine the amount of sequence divergence. This raw measurement of divergence provides information about the number of changes that have occurred along the path separating the sequences. The simple count of differences (the Hamming distance) between sequences will often underestimate the number of substitutions because of multiple hits (see homoplasy). Trying to estimate the exact number of changes that have occurred is difficult, and usually not necessary. Instead, branch lengths (and path lengths) in phylogenetic analyses are usually expressed in the expected number of changes per site. The path length is the product of the duration of the path in time and the mean rate of substitutions. While their product can be estimated, the rate and time are not identifiable from sequence divergence.
The descriptions of rate matrices on this page accurately reflect the relative magnitude of different substitutions, but these rate matrices are not scaled such that a branch length of 1 yields one expected change. This scaling can be accomplished by multiplying every element of the matrix by the same factor, or simply by scaling the branch lengths. If we use the β to denote the scaling factor, and ν to denote the branch length measured in the expected number of substitutions per site then βν is used in the transition probability formulae below in place of μt. Note that ν is a parameter to be estimated from data, and is referred to as the branch length, while β is simply a number that can be calculated from the rate matrix (it is not a separate free parameter).
The value of β can be found by forcing the expected rate of flux of states to 1. The diagonal entries of the rate-matrix (the Q matrix) represent -1 times the rate of leaving each state. For time-reversible models, we know the equilibrium state frequencies (these are simply the πi parameter value for state i). Thus we can find the expected rate of change by calculating the sum of flux out of each state weighted by the proportion of sites that are expected to be in that class. Setting β to be the reciprocal of this sum will guarantee that scaled process has an expected flux of 1:
{\displaystyle \beta =1/\left(-\sum _{i}\pi _{i}\mu _{ii}\right)}
For example, in the Jukes–Cantor model, the scaling factor would be 4/(3μ) because the rate of leaving each state is 3μ/4.
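The scaling computation is a one-liner once the stationary frequencies and the unscaled rate matrix are in hand. A minimal sketch (function name is illustrative):

```python
def scaling_factor(pi, Q):
    """beta = 1 / (-sum_i pi_i * mu_ii): the factor that rescales Q so that
    a branch length of 1 corresponds to one expected substitution per site."""
    return 1.0 / (-sum(p * Q[i][i] for i, p in enumerate(pi)))
```

For a Jukes–Cantor matrix with overall rate μ, each diagonal entry is −3μ/4 and each frequency is 1/4, so the expected flux is 3μ/4 and the factor is 4/(3μ), as stated above.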
== Most common models of DNA evolution ==
=== JC69 model (Jukes and Cantor 1969) ===
JC69, the Jukes and Cantor 1969 model, is the simplest substitution model. There are several assumptions. It assumes equal base frequencies {\displaystyle \left(\pi _{A}=\pi _{G}=\pi _{C}=\pi _{T}={1 \over 4}\right)} and equal mutation rates. The only parameter of this model is therefore {\displaystyle \mu }, the overall substitution rate. As previously mentioned, this variable becomes a constant when we normalize the mean-rate to 1.
{\displaystyle Q={\begin{pmatrix}{*}&{\mu \over 4}&{\mu \over 4}&{\mu \over 4}\\{\mu \over 4}&{*}&{\mu \over 4}&{\mu \over 4}\\{\mu \over 4}&{\mu \over 4}&{*}&{\mu \over 4}\\{\mu \over 4}&{\mu \over 4}&{\mu \over 4}&{*}\end{pmatrix}}}
{\displaystyle P={\begin{pmatrix}{{1 \over 4}+{3 \over 4}e^{-t\mu }}&{{1 \over 4}-{1 \over 4}e^{-t\mu }}&{{1 \over 4}-{1 \over 4}e^{-t\mu }}&{{1 \over 4}-{1 \over 4}e^{-t\mu }}\\\\{{1 \over 4}-{1 \over 4}e^{-t\mu }}&{{1 \over 4}+{3 \over 4}e^{-t\mu }}&{{1 \over 4}-{1 \over 4}e^{-t\mu }}&{{1 \over 4}-{1 \over 4}e^{-t\mu }}\\\\{{1 \over 4}-{1 \over 4}e^{-t\mu }}&{{1 \over 4}-{1 \over 4}e^{-t\mu }}&{{1 \over 4}+{3 \over 4}e^{-t\mu }}&{{1 \over 4}-{1 \over 4}e^{-t\mu }}\\\\{{1 \over 4}-{1 \over 4}e^{-t\mu }}&{{1 \over 4}-{1 \over 4}e^{-t\mu }}&{{1 \over 4}-{1 \over 4}e^{-t\mu }}&{{1 \over 4}+{3 \over 4}e^{-t\mu }}\end{pmatrix}}}
When branch length, $\nu$, is measured in the expected number of changes per site then:
$$P_{ij}(\nu)=\begin{cases} \tfrac14+\tfrac34 e^{-4\nu/3} & \text{if } i=j \\ \tfrac14-\tfrac14 e^{-4\nu/3} & \text{if } i\neq j \end{cases}$$
It is worth noticing that $\nu = \tfrac{3}{4}t\mu = \left(\tfrac{\mu}{4}+\tfrac{\mu}{4}+\tfrac{\mu}{4}\right)t$, which is the sum of the off-diagonal entries of any row (or column) of the matrix $Q$ multiplied by time, and thus represents the expected number of substitutions over time $t$ (branch duration) for each particular site (per site) when the rate of substitution equals $\mu$.
Given the proportion $p$ of sites that differ between the two sequences, the Jukes–Cantor estimate of the evolutionary distance (in terms of the expected number of changes) between two sequences is given by
$$\hat{d} = -\tfrac{3}{4}\ln\left(1-\tfrac{4}{3}p\right) = \hat{\nu}$$
The $p$ in this formula is frequently referred to as the $p$-distance. It is a sufficient statistic for calculating the Jukes–Cantor distance correction, but is not sufficient for the calculation of the evolutionary distance under the more complex models that follow (also note that the $p$ used in subsequent formulae is not identical to the $p$-distance).
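As an illustration, the distance correction can be computed directly from two aligned sequences; the helper function and the toy sequences below are hypothetical, not from the source:

```python
import math

def jc69_distance(seq1, seq2):
    """Jukes-Cantor distance: d = -(3/4) * ln(1 - (4/3) * p),
    where p is the proportion of sites that differ."""
    assert len(seq1) == len(seq2)
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -0.75 * math.log(1.0 - (4.0 / 3.0) * p)

d = jc69_distance("ACGTACGTAC", "ACGTACGTAT")  # p = 0.1
```

The corrected distance always exceeds the raw p-distance, reflecting unobserved multiple substitutions at the same site.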
=== K80 model (Kimura 1980) ===
K80, the Kimura 1980 model, often referred to as Kimura's two-parameter model (or the K2P model), distinguishes between transitions ($A\leftrightarrow G$, i.e. from purine to purine, or $C\leftrightarrow T$, i.e. from pyrimidine to pyrimidine) and transversions (from purine to pyrimidine or vice versa). In Kimura's original description of the model, α and β were used to denote the rates of these types of substitutions, but it is now more common to set the rate of transversions to 1 and use κ to denote the transition/transversion rate ratio (as is done below). The K80 model assumes that all of the bases are equally frequent ($\pi_A=\pi_G=\pi_C=\pi_T=\tfrac14$).
Rate matrix:
$$Q=\begin{pmatrix} * & \kappa & 1 & 1 \\ \kappa & * & 1 & 1 \\ 1 & 1 & * & \kappa \\ 1 & 1 & \kappa & * \end{pmatrix}$$
with columns corresponding to $A$, $G$, $C$, and $T$, respectively.
The Kimura two-parameter distance is given by:
$$K=-\tfrac{1}{2}\ln\left((1-2p-q)\sqrt{1-2q}\right)$$
where $p$ is the proportion of sites that show transitional differences and $q$ is the proportion of sites that show transversional differences.
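A short sketch of this formula in code; the values of p and q are illustrative assumptions:

```python
import math

def k80_distance(p, q):
    """Kimura two-parameter distance:
    K = -(1/2) * ln((1 - 2p - q) * sqrt(1 - 2q)),
    where p is the transitional and q the transversional
    proportion of differing sites."""
    return -0.5 * math.log((1.0 - 2.0 * p - q) * math.sqrt(1.0 - 2.0 * q))

K = k80_distance(0.10, 0.02)  # illustrative proportions
```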
=== K81 model (Kimura 1981) ===
K81, the Kimura 1981 model, often called Kimura's three-parameter model (K3P model) or the Kimura three substitution type (K3ST) model, has distinct rates for transitions and for two distinct types of transversions. The two transversion types are those that conserve the weak/strong properties of the nucleotides (i.e., $A\leftrightarrow T$ and $C\leftrightarrow G$, denoted by the symbol $\gamma$) and those that conserve the amino/keto properties of the nucleotides (i.e., $A\leftrightarrow C$ and $G\leftrightarrow T$, denoted by the symbol $\beta$). The K81 model assumes that all equilibrium base frequencies are equal (i.e., $\pi_A=\pi_G=\pi_C=\pi_T=0.25$).
Rate matrix:
$$Q=\begin{pmatrix} * & \alpha & \beta & \gamma \\ \alpha & * & \gamma & \beta \\ \beta & \gamma & * & \alpha \\ \gamma & \beta & \alpha & * \end{pmatrix}$$
with columns corresponding to $A$, $G$, $C$, and $T$, respectively.
The K81 model is used much less often than the K80 (K2P) model for distance estimation and it is seldom the best-fitting model in maximum likelihood phylogenetics. Despite these facts, the K81 model has continued to be studied in the context of mathematical phylogenetics. One important property is the ability to perform a Hadamard transform assuming the site patterns were generated on a tree with nucleotides evolving under the K81 model.
When used in the context of phylogenetics, the Hadamard transform provides an elegant and fully invertible means to calculate expected site pattern frequencies given a set of branch lengths (or vice versa). Unlike many maximum likelihood calculations, the relative values for $\alpha$, $\beta$, and $\gamma$ can vary across branches, and the Hadamard transform can even provide evidence that the data do not fit a tree. The Hadamard transform can also be combined with a wide variety of methods to accommodate among-sites rate heterogeneity, using continuous distributions rather than the discrete approximations typically used in maximum likelihood phylogenetics (although one must sacrifice the invertibility of the Hadamard transform to use certain among-sites rate heterogeneity distributions).
=== F81 model (Felsenstein 1981) ===
F81, Felsenstein's 1981 model, is an extension of the JC69 model in which base frequencies are allowed to vary from 0.25 ($\pi_A\neq\pi_G\neq\pi_C\neq\pi_T$).
Rate matrix:
$$Q=\begin{pmatrix} * & \pi_G & \pi_C & \pi_T \\ \pi_A & * & \pi_C & \pi_T \\ \pi_A & \pi_G & * & \pi_T \\ \pi_A & \pi_G & \pi_C & * \end{pmatrix}$$
When branch length, ν, is measured in the expected number of changes per site then:
$$\beta = 1/\left(1-\pi_A^2-\pi_C^2-\pi_G^2-\pi_T^2\right)$$
$$P_{ij}(\nu)=\begin{cases} e^{-\beta\nu}+\pi_j\left(1-e^{-\beta\nu}\right) & \text{if } i=j \\ \pi_j\left(1-e^{-\beta\nu}\right) & \text{if } i\neq j \end{cases}$$
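The F81 transition probabilities can be sketched as follows; the base frequencies used here are illustrative assumptions, not values from the source:

```python
import math

# Illustrative (assumed) equilibrium base frequencies; they must sum to 1.
pi = {"A": 0.3, "G": 0.2, "C": 0.2, "T": 0.3}
beta = 1.0 / (1.0 - sum(f * f for f in pi.values()))

def p_ij(i, j, nu):
    """F81 transition probability:
    e^{-beta*nu} + pi_j * (1 - e^{-beta*nu}) if i == j,
    pi_j * (1 - e^{-beta*nu}) otherwise."""
    e = math.exp(-beta * nu)
    return e + pi[j] * (1.0 - e) if i == j else pi[j] * (1.0 - e)

row_sum = sum(p_ij("A", j, 0.5) for j in "AGCT")  # each row of P sums to 1
```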
=== HKY85 model (Hasegawa, Kishino and Yano 1985) ===
HKY85, the Hasegawa, Kishino and Yano 1985 model, can be thought of as combining the extensions made in the Kimura80 and Felsenstein81 models. Namely, it distinguishes between the rate of transitions and transversions (using the κ parameter), and it allows unequal base frequencies ($\pi_A\neq\pi_G\neq\pi_C\neq\pi_T$). [Felsenstein described a similar (but not equivalent) model in 1984 using a different parameterization; that latter model is referred to as the F84 model.]
Rate matrix:
$$Q=\begin{pmatrix} * & \kappa\pi_G & \pi_C & \pi_T \\ \kappa\pi_A & * & \pi_C & \pi_T \\ \pi_A & \pi_G & * & \kappa\pi_T \\ \pi_A & \pi_G & \kappa\pi_C & * \end{pmatrix}$$
If we express the branch length, $\nu$, in terms of the expected number of changes per site then:
$$\beta = \frac{1}{2(\pi_A+\pi_G)(\pi_C+\pi_T)+2\kappa\left[(\pi_A\pi_G)+(\pi_C\pi_T)\right]}$$
$$P_{AA}(\nu,\kappa,\pi)=\left[\pi_A\left(\pi_A+\pi_G+(\pi_C+\pi_T)e^{-\beta\nu}\right)+\pi_G e^{-(1+(\pi_A+\pi_G)(\kappa-1))\beta\nu}\right]/(\pi_A+\pi_G)$$
$$P_{AC}(\nu,\kappa,\pi)=\pi_C\left(1-e^{-\beta\nu}\right)$$
$$P_{AG}(\nu,\kappa,\pi)=\left[\pi_G\left(\pi_A+\pi_G+(\pi_C+\pi_T)e^{-\beta\nu}\right)-\pi_G e^{-(1+(\pi_A+\pi_G)(\kappa-1))\beta\nu}\right]/(\pi_A+\pi_G)$$
$$P_{AT}(\nu,\kappa,\pi)=\pi_T\left(1-e^{-\beta\nu}\right)$$
and formulae for the other combinations of states can be obtained by substituting in the appropriate base frequencies.
=== T92 model (Tamura 1992) ===
T92, the Tamura 1992 model, is a mathematical method developed to estimate the number of nucleotide substitutions per site between two DNA sequences, by extending Kimura's (1980) two-parameter method to the case where a G+C content bias exists. This method will be useful when there are strong transition-transversion and G+C-content biases, as in the case of Drosophila mitochondrial DNA.
T92 involves a single, compound base frequency parameter $\theta \in (0,1)$ (also denoted $\pi_{GC}$), where
$$\theta = \pi_{GC} = \pi_G+\pi_C = 1-(\pi_A+\pi_T).$$
As T92 echoes Chargaff's second parity rule (pairing nucleotides do have the same frequency on a single DNA strand: G and C on the one hand, and A and T on the other hand), it follows that the four base frequencies can be expressed as a function of $\pi_{GC}$:
$$\pi_G=\pi_C=\frac{\pi_{GC}}{2} \quad\text{and}\quad \pi_A=\pi_T=\frac{1-\pi_{GC}}{2}$$
Rate matrix:
$$Q=\begin{pmatrix} * & \kappa\pi_{GC}/2 & \pi_{GC}/2 & (1-\pi_{GC})/2 \\ \kappa(1-\pi_{GC})/2 & * & \pi_{GC}/2 & (1-\pi_{GC})/2 \\ (1-\pi_{GC})/2 & \pi_{GC}/2 & * & \kappa(1-\pi_{GC})/2 \\ (1-\pi_{GC})/2 & \pi_{GC}/2 & \kappa\pi_{GC}/2 & * \end{pmatrix}$$
The evolutionary distance between two DNA sequences according to this model is given by
$$d=-h\ln\left(1-\frac{p}{h}-q\right)-\frac{1}{2}(1-h)\ln(1-2q)$$
where $h=2\theta(1-\theta)$ and $\theta$ is the G+C content ($\pi_{GC}=\pi_G+\pi_C$).
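A sketch of the T92 distance as a function; the values of p, q, and θ below are illustrative assumptions:

```python
import math

def t92_distance(p, q, theta):
    """Tamura (1992) distance:
    d = -h * ln(1 - p/h - q) - (1/2) * (1 - h) * ln(1 - 2q),
    with h = 2 * theta * (1 - theta), theta being the G+C content."""
    h = 2.0 * theta * (1.0 - theta)
    return (-h * math.log(1.0 - p / h - q)
            - 0.5 * (1.0 - h) * math.log(1.0 - 2.0 * q))

d = t92_distance(0.10, 0.02, 0.55)  # illustrative p, q, and G+C content
```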
=== TN93 model (Tamura and Nei 1993) ===
TN93, the Tamura and Nei 1993 model, distinguishes between the two different types of transition; i.e. ($A\leftrightarrow G$) is allowed to have a different rate to ($C\leftrightarrow T$). Transversions are all assumed to occur at the same rate, but that rate is allowed to be different from both of the rates for transitions. TN93 also allows unequal base frequencies ($\pi_A\neq\pi_G\neq\pi_C\neq\pi_T$).
Rate matrix:
$$Q=\begin{pmatrix} * & \kappa_1\pi_G & \pi_C & \pi_T \\ \kappa_1\pi_A & * & \pi_C & \pi_T \\ \pi_A & \pi_G & * & \kappa_2\pi_T \\ \pi_A & \pi_G & \kappa_2\pi_C & * \end{pmatrix}$$
=== GTR model (Tavaré 1986) ===
GTR, the generalised time-reversible model, is the most general neutral, independent, finite-sites, time-reversible model possible. It was first described in a general form by Simon Tavaré in 1986.
GTR parameters consist of an equilibrium base frequency vector, $\Pi=(\pi_A,\pi_G,\pi_C,\pi_T)$, giving the frequency at which each base occurs at each site, and the rate matrix
$$Q=\begin{pmatrix} -(\alpha\pi_G+\beta\pi_C+\gamma\pi_T) & \alpha\pi_G & \beta\pi_C & \gamma\pi_T \\ \alpha\pi_A & -(\alpha\pi_A+\delta\pi_C+\epsilon\pi_T) & \delta\pi_C & \epsilon\pi_T \\ \beta\pi_A & \delta\pi_G & -(\beta\pi_A+\delta\pi_G+\eta\pi_T) & \eta\pi_T \\ \gamma\pi_A & \epsilon\pi_G & \eta\pi_C & -(\gamma\pi_A+\epsilon\pi_G+\eta\pi_C) \end{pmatrix}$$
where
$$\begin{aligned}\alpha &= r(A\rightarrow G)=r(G\rightarrow A)\\ \beta &= r(A\rightarrow C)=r(C\rightarrow A)\\ \gamma &= r(A\rightarrow T)=r(T\rightarrow A)\\ \delta &= r(G\rightarrow C)=r(C\rightarrow G)\\ \epsilon &= r(G\rightarrow T)=r(T\rightarrow G)\\ \eta &= r(C\rightarrow T)=r(T\rightarrow C)\end{aligned}$$
are the transition rate parameters.
Therefore, GTR (for four characters, as is often the case in phylogenetics) requires 6 substitution rate parameters, as well as 4 equilibrium base frequency parameters. However, this is usually reduced to 9 parameters plus $\mu$, the overall number of substitutions per unit time. When measuring time in substitutions ($\mu=1$) only 8 free parameters remain.
In general, to compute the number of parameters, one must count the number of entries above the diagonal in the matrix, i.e. for n trait values per site $\frac{n^2-n}{2}$, then add n for the equilibrium base frequencies, and subtract 1 because $\mu$ is fixed. One gets
$$\frac{n^2-n}{2}+n-1=\frac{1}{2}n^2+\frac{1}{2}n-1.$$
For example, for an amino acid sequence (there are 20 "standard" amino acids that make up proteins), one would find there are 209 parameters. However, when studying coding regions of the genome, it is more common to work with a codon substitution model (a codon is three bases and codes for one amino acid in a protein). There are $4^3=64$ codons, but the rates for transitions between codons which differ by more than one base are assumed to be zero. Hence, there are
$$\frac{20\times 19\times 3}{2}+64-1=633$$
parameters.
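The parameter counts above can be reproduced directly from the formula:

```python
def num_free_parameters(n):
    """Free parameters for an n-state time-reversible model, per the
    formula above: (n^2 - n)/2 rate parameters above the diagonal,
    plus n equilibrium frequencies, minus 1 because mu is fixed."""
    return (n * n - n) // 2 + n - 1

amino_acids = num_free_parameters(20)   # 209 parameters for amino acids
codons = (20 * 19 * 3) // 2 + 64 - 1    # 633 for the codon model described above
```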
== Two-state substitution models ==
An alternative way to analyze DNA sequence data is to recode the nucleotides as purines (R) and pyrimidines (Y); this practice is often called RY-coding. Insertions and deletions in multiple sequence alignments can also be encoded as binary data and analyzed using a two-state model.
The simplest two-state model of sequence evolution is called the Cavender-Farris model or the Cavender-Farris-Neyman (CFN) model; the name of this model reflects the fact that it was described independently in several different publications. The CFN model is identical to the Jukes-Cantor model adapted to two states, and it has even been implemented as the "JC2" model in the popular IQ-TREE software package (using this model in IQ-TREE requires coding the data as 0 and 1 rather than R and Y; the popular PAUP* software package can interpret a data matrix comprising only R and Y as data to be analyzed using the CFN model). It is also straightforward to analyze binary data using the phylogenetic Hadamard transform. The alternative two-state model allows the equilibrium frequency parameters of R and Y (or 0 and 1) to take on values other than 0.5 by adding a single free parameter; this model is variously called CFu or GTR2 (in IQ-TREE).
Other recoding methods are WS (weak-strong) and MK (amino-keto).
== Lie Markov models ==
Lie Markov models are, from a mathematical point of view, Markov models that form a Lie algebra. For the mathematician, this makes them closed under matrix multiplication. From a phylogeneticist's point of view, these models have the benefit of being able to add or remove taxa without affecting the site patterns that the model can generate over the remaining taxa. There is also a natural hierarchy of models based on how many parameters can be changed. Some existing models such as JC and F81 are already Lie Markov models, while GTR is not. Lie Markov models (with RY, WS, or MK) are available in IQ-TREE.
== See also ==
Molecular evolution
Molecular clock
UPGMA
== References ==
=== Further reading ===
== External links ==
DAWG: DNA Assembly With Gaps — free software for simulating sequence evolution | Wikipedia/Evolutionary_model |
In mathematics, the Gauss map (also known as Gaussian map or mouse map), is a nonlinear iterated map of the reals into a real interval given by the Gaussian function:
$$x_{n+1}=\exp(-\alpha x_n^2)+\beta,$$
where α and β are real parameters.
Named after Johann Carl Friedrich Gauss, the map uses the bell-shaped Gaussian function and exhibits behaviour similar to that of the logistic map.
== Properties ==
For some regions of the real parameter space, $x_n$ can be chaotic. The map is also called the mouse map because its bifurcation diagram resembles a mouse (see figures).
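A short sketch of iterating the map; the parameter values are illustrative, and whether the orbit is chaotic depends on α and β:

```python
import math

def gauss_map(x, alpha, beta):
    """One step of the Gauss map: x_{n+1} = exp(-alpha * x_n^2) + beta."""
    return math.exp(-alpha * x * x) + beta

def orbit(x0, alpha, beta, n):
    """Return [x0, x1, ..., xn] by iterating the map n times."""
    xs = [x0]
    for _ in range(n):
        xs.append(gauss_map(xs[-1], alpha, beta))
    return xs

# Illustrative parameter choice; since exp(-alpha*x^2) lies in (0, 1],
# every iterate is confined to the interval (beta, 1 + beta].
traj = orbit(0.1, 4.9, -0.58, 100)
```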
== References == | Wikipedia/Gauss_iterated_map |
Society for Industrial and Applied Mathematics (SIAM) is a professional society dedicated to applied mathematics, computational science, and data science through research, publications, and community. SIAM is the world's largest scientific society devoted to applied mathematics, and roughly two-thirds of its membership resides within the United States. Founded in 1951, the organization began holding annual national meetings in 1954, and now hosts conferences, publishes books and scholarly journals, and engages in advocacy on issues of interest to its membership. Members include engineers, scientists, and mathematicians, both those employed in academia and those working in industry. The society supports educational institutions promoting applied mathematics.
SIAM is one of the four member organizations of the Joint Policy Board for Mathematics.
== Membership ==
Membership is open to both individuals and organizations. By the end of its first full year of operation, SIAM had 130 members; by 1968, it had 3,700.
Student members can join SIAM chapters, which are affiliated with universities and run by students and faculty. Most universities with SIAM chapters are in the United States (including Harvard and MIT), but SIAM chapters also exist in other countries, for example at Oxford, at the École Polytechnique Fédérale de Lausanne and at Peking University. SIAM publishes the SIAM Undergraduate Research Online, a venue for undergraduate research in applied and computational mathematics. (SIAM also offers the SIAM Visiting Lecture Program, which helps arrange visits from industrial mathematicians to speak to student groups about applied mathematics and their own professional experiences.)
In 2009, SIAM instituted a Fellows program to recognize certain members who have made outstanding contributions to the fields that SIAM serves.
== Activity groups ==
The society includes a number of activity groups (SIAGs) to allow for more focused group discussions and collaborations. Activity groups organize domain-specific conferences and minisymposia, and award prizes.
Unlike special interest groups in similar academic associations like ACM, activity groups are chartered for a fixed period of time, typically for two years, and require submitting a petition to the SIAM Council and Board for renewal. Charter approval is largely based on group size, as topics that were considered hot at one time may have fewer active researchers later.
Current Activity Groups:
== Publications ==
=== Journals ===
SIAM publishes 18 research journals:
SIAM Journal on Applied Mathematics (SIAP), since 1966
formerly Journal of the Society for Industrial and Applied Mathematics, since 1953
Theory of Probability and Its Applications (TVP), since 1956
translation of Teoriya Veroyatnostei i ee Primeneniya
SIAM Review (SIREV), since 1959
SIAM Journal on Control and Optimization (SICON), since 1976
formerly SIAM Journal on Control, since 1966
formerly Journal of the Society for Industrial and Applied Mathematics, Series A: Control, since 1962
SIAM Journal on Numerical Analysis (SINUM), since 1966
formerly Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis, since 1964
SIAM Journal on Mathematical Analysis (SIMA), since 1970
SIAM Journal on Computing (SICOMP), since 1972
SIAM Journal on Matrix Analysis and Applications (SIMAX), since 1988
formerly SIAM Journal on Algebraic and Discrete Methods, since 1980
SIAM Journal on Scientific Computing (SISC), since 1993
formerly SIAM Journal on Scientific and Statistical Computing, since 1980
SIAM Journal on Discrete Mathematics (SIDMA), since 1988
SIAM Journal on Optimization (SIOPT), since 1991
SIAM Journal on Applied Dynamical Systems (SIADS), since 2002
Multiscale Modeling and Simulation (MMS), since 2003
SIAM Journal on Imaging Sciences (SIIMS), since 2008
SIAM Journal on Financial Mathematics (SIFIN), since 2010
SIAM/ASA Journal on Uncertainty Quantification (JUQ), since 2013
SIAM Journal on Applied Algebra and Geometry (SIAGA), since 2017
SIAM Journal on Mathematics of Data Science (SIMODS), since 2018
=== Books ===
SIAM publishes roughly 20 books each year, including textbooks, conference proceedings and monographs. Many of these are issued in themed series, such as "Advances in design and control", "Financial mathematics" and "Monographs on discrete mathematics and applications". In particular, SIAM distributes books produced by Gilbert Strang's Wellesley-Cambridge Press, such as his Introduction to Linear Algebra (5th edition, 2016). Organizations such as libraries can obtain DRM-free access to SIAM books in eBook format for a subscription fee.
=== Conferences ===
SIAM organizes conferences and meetings throughout the year focused on various topics in applied math and computational science. For example, SIAM has hosted an annual conference on data mining since 2001. The establishment of the SIAM Conferences on Discrete Mathematics, held every two years, has been regarded as a sign of the growth of graph theory as a prominent topic of study. The International Meshing Roundtable is an annual conference that was independently established in 1992 and joined SIAM as a workshop in 2022.
In conjunction with the Association for Computing Machinery, SIAM also organizes the annual Symposium on Discrete Algorithms, using the format of a theoretical computer science conference rather than the mathematics conference format that SIAM typically uses for its conferences.
== Prizes and recognition ==
SIAM recognizes applied mathematicians and computational scientists for their contributions to the fields. Prizes include:
Germund Dahlquist Prize: Awarded to a young scientist (normally under 45) for original contributions to fields associated with Germund Dahlquist (numerical solution of differential equations and numerical methods for scientific computing).
Ralph E. Kleinman Prize: Awarded for "outstanding research, or other contributions, that bridge the gap between mathematics and applications...Each prize may be given either for a single notable achievement or for a collection of such achievements."
J.D. Crawford Prize: Awarded to "one individual for recent outstanding work on a topic in nonlinear science, as evidenced by a publication in English in a peer-reviewed journal within the four calendar years preceding the meeting at which the prize is awarded"
Jürgen Moser Lecture: Awarded to "a person who has made distinguished contributions to nonlinear science".
Richard C. DiPrima Prize: Awarded to "a young scientist who has done outstanding research in applied mathematics (defined as those topics covered by SIAM journals) and who has completed his/her doctoral dissertation and completed all other requirements for his/her doctorate during the period running from three years prior to the award date to one year prior to the award date".
George Pólya Prize: "is given every two years, alternately in two categories: (1) for a notable application of combinatorial theory; (2) for a notable contribution in another area of interest to George Pólya such as approximation theory, complex analysis, number theory, orthogonal polynomials, probability theory, or mathematical discovery and learning."
W. T. and Idalia Reid Prize: Awarded for research in and contributions to areas of differential equations and control theory.
Theodore von Kármán Prize: Awarded for "notable application of mathematics to mechanics and/or the engineering sciences made during the five to ten years preceding the award".
James H. Wilkinson Prize in Numerical Analysis and Scientific Computing: Awarded for "research in, or other contributions to, numerical analysis and scientific computing during the six years preceding the award".
=== John von Neumann Lecture ===
The John von Neumann Lecture prize was established in 1959 with funds from IBM and other industry corporations, and is awarded for "outstanding and distinguished contributions to the field of applied mathematical sciences and for the effective communication of these ideas to the community". The recipient receives a monetary award and presents a survey lecture at the Annual Meeting.
=== MathWorks Math Modeling (M3) Challenge ===
The MathWorks Math Modeling Challenge is an applied mathematics modeling competition for high school students in the United States. Scholarship prizes totaled $60,000 in 2006, and have since been raised to $150,000. It is funded by Mathworks. Originally, the prize was sponsored by the financial services company Moody's and known as the Moody's Mega Math Challenge.
== Leadership ==
The chief elected officer of SIAM is the president, elected for a single two-year term. SIAM employs an executive director and staff.
The following people have been presidents of the society:
== See also ==
American Mathematical Society
Japan Society for Industrial and Applied Mathematics
Gesellschaft für Angewandte Mathematik und Mechanik
== References ==
== External links ==
Official website
M3Challenge.SIAM.org | Wikipedia/SIAM_Journal_on_Applied_Dynamical_Systems |
In mathematics, specifically in the study of dynamical systems, an orbit is a collection of points related by the evolution function of the dynamical system. It can be understood as the subset of phase space covered by the trajectory of the dynamical system under a particular set of initial conditions, as the system evolves. As a phase space trajectory is uniquely determined for any given set of phase space coordinates, it is not possible for different orbits to intersect in phase space, therefore the set of all orbits of a dynamical system is a partition of the phase space. Understanding the properties of orbits by using topological methods is one of the objectives of the modern theory of dynamical systems.
For discrete-time dynamical systems, the orbits are sequences; for real dynamical systems, the orbits are curves; and for holomorphic dynamical systems, the orbits are Riemann surfaces.
== Definition ==
Given a dynamical system (T, M, Φ) with T a group, M a set and Φ the evolution function
$$\Phi : U \to M, \qquad U \subset T\times M,$$
with $\Phi(0,x)=x$, we define
$$I(x):=\{t\in T : (t,x)\in U\},$$
then the set
$$\gamma_x := \{\Phi(t,x) : t\in I(x)\}\subset M$$
is called the orbit through x. An orbit which consists of a single point is called a constant orbit. A non-constant orbit is called closed or periodic if there exists a $t\neq 0$ in $I(x)$ such that $\Phi(t,x)=x$.
=== Real dynamical system ===
Given a real dynamical system (R, M, Φ), I(x) is an open interval in the real numbers, that is $I(x)=(t_x^-, t_x^+)$. For any x in M,
$$\gamma_x^+ := \{\Phi(t,x) : t\in (0,t_x^+)\}$$
is called the positive semi-orbit through x and
$$\gamma_x^- := \{\Phi(t,x) : t\in (t_x^-,0)\}$$
is called the negative semi-orbit through x.
=== Discrete time dynamical system ===
For a discrete time dynamical system with a time-invariant evolution function $f$:
The forward orbit of x is the set
$$\gamma_x^+ \overset{\mathrm{def}}{=} \{f^t(x) : t\geq 0\}.$$
If the function is invertible, the backward orbit of x is the set
$$\gamma_x^- \overset{\mathrm{def}}{=} \{f^t(x) : t\leq 0\},$$
and the orbit of x is the set
$$\gamma_x \overset{\mathrm{def}}{=} \gamma_x^- \cup \gamma_x^+,$$
where:
$f$ is the evolution function $f : X\to X$,
the set $X$ is the dynamical space,
$t$ is the number of the iteration, a natural number with $t\in T$, and
$x$ is the initial state of the system, with $x\in X$.
=== General dynamical system ===
For a general dynamical system, especially in homogeneous dynamics, when one has a "nice" group $G$ acting on a probability space $X$ in a measure-preserving way, an orbit $G.x \subset X$ will be called periodic (or equivalently, closed) if the stabilizer $\mathrm{Stab}_G(x)$ is a lattice inside $G$.
In addition, a related term is a bounded orbit, when the set $G.x$ is pre-compact inside $X$.
The classification of orbits can lead to interesting questions with relations to other mathematical areas; for example, the Oppenheim conjecture (proved by Margulis) and the Littlewood conjecture (partially proved by Lindenstrauss) deal with the question of whether every bounded orbit of some natural action on the homogeneous space $SL_3(\mathbb{R})\backslash SL_3(\mathbb{Z})$ is indeed a periodic one; this observation is due to Raghunathan, and in different language due to Cassels and Swinnerton-Dyer. Such questions are intimately related to deep measure-classification theorems.
=== Notes ===
It is often the case that the evolution function can be understood to compose the elements of a group, in which case the group-theoretic orbits of the group action are the same thing as the dynamical orbits.
== Examples ==
The orbit of an equilibrium point is a constant orbit.
== Stability of orbits ==
A basic classification of orbits is
constant orbits or fixed points
periodic orbits
non-constant and non-periodic orbits
An orbit can fail to be closed in two ways.
It could be an asymptotically periodic orbit if it converges to a periodic orbit. Such orbits are not closed because they never truly repeat, but they become arbitrarily close to a repeating orbit.
An orbit can also be chaotic. These orbits come arbitrarily close to the initial point, but fail to ever converge to a periodic orbit. They exhibit sensitive dependence on initial conditions, meaning that small differences in the initial value will cause large differences in future points of the orbit.
There are other properties of orbits that allow for different classifications. An orbit can be hyperbolic if nearby points approach or diverge from the orbit exponentially fast.
== See also ==
Wandering set
Phase space method
Phase space
Cobweb plot or Verhulst diagram
Periodic points of complex quadratic mappings and multiplier of orbit
Orbit portrait
== References ==
Hale, Jack K.; Koçak, Hüseyin (1991). "Periodic Orbits". Dynamics and Bifurcations. New York: Springer. pp. 365–388. ISBN 0-387-97141-6.
Katok, Anatole; Hasselblatt, Boris (1996). Introduction to the modern theory of dynamical systems. Cambridge: Cambridge University Press. ISBN 0-521-57557-5.
Perko, Lawrence (2001). "Periodic Orbits, Limit Cycles and Separatrix Cycles". Differential Equations and Dynamical Systems (Third ed.). New York: Springer. pp. 202–211. ISBN 0-387-95116-4. | Wikipedia/Orbit_(dynamics) |
A hash function is any function that can be used to map data of arbitrary size to fixed-size values, though there are some hash functions that support variable-length output. The values returned by a hash function are called hash values, hash codes, (hash/message) digests, or simply hashes. The values are usually used to index a fixed-size table called a hash table. Use of a hash function to index a hash table is called hashing or scatter-storage addressing.
Hash functions and their associated hash tables are used in data storage and retrieval applications to access data in a small and nearly constant time per retrieval. They require an amount of storage space only fractionally greater than the total space required for the data or records themselves. Hashing is a computationally- and storage-space-efficient form of data access that avoids the non-constant access time of ordered and unordered lists and structured trees, and the often-exponential storage requirements of direct access of state spaces of large or variable-length keys.
Use of hash functions relies on statistical properties of key and function interaction: worst-case behavior is intolerably bad but rare, and average-case behavior can be nearly optimal (minimal collision).: 527
Hash functions are related to (and often confused with) checksums, check digits, fingerprints, lossy compression, randomization functions, error-correcting codes, and ciphers. Although the concepts overlap to some extent, each one has its own uses and requirements and is designed and optimized differently. The hash function differs from these concepts mainly in terms of data integrity. Hash tables may use non-cryptographic hash functions, while cryptographic hash functions are used in cybersecurity to secure sensitive data such as passwords.
== Overview ==
In a hash table, a hash function takes a key as an input, which is associated with a datum or record and used to identify it to the data storage and retrieval application. The keys may be fixed-length, like an integer, or variable-length, like a name. In some cases, the key is the datum itself. The output is a hash code used to index a hash table holding the data or records, or pointers to them.
A hash function may be considered to perform three functions:
Convert variable-length keys into fixed-length (usually machine-word-length or less) values, by folding them by words or other units using a parity-preserving operator like ADD or XOR,
Scramble the bits of the key so that the resulting values are uniformly distributed over the keyspace, and
Map the key values into ones less than or equal to the size of the table.
A good hash function satisfies two basic properties: it should be very fast to compute, and it should minimize duplication of output values (collisions). Hash functions rely on generating favorable probability distributions for their effectiveness, reducing access time to nearly constant. High table loading factors, pathological key sets, and poorly designed hash functions can result in access times approaching linear in the number of items in the table. Hash functions can be designed to give the best worst-case performance, good performance under high table loading factors, and in special cases, perfect (collisionless) mapping of keys into hash codes. Implementation is based on parity-preserving bit operations (XOR and ADD), multiply, or divide. A necessary adjunct to the hash function is a collision-resolution method that employs an auxiliary data structure like linked lists, or systematic probing of the table to find an empty slot.
== Hash tables ==
Hash functions are used in conjunction with hash tables to store and retrieve data items or data records. The hash function translates the key associated with each datum or record into a hash code, which is used to index the hash table. When an item is to be added to the table, the hash code may index an empty slot (also called a bucket), in which case the item is added to the table there. If the hash code indexes a full slot, then some kind of collision resolution is required: the new item may be omitted (not added to the table), or replace the old item, or be added to the table in some other location by a specified procedure. That procedure depends on the structure of the hash table. In chained hashing, each slot is the head of a linked list or chain, and items that collide at the slot are added to the chain. Chains may be kept in random order and searched linearly, or in serial order, or as a self-ordering list by frequency to speed up access. In open address hashing, the table is probed starting from the occupied slot in a specified manner, usually by linear probing, quadratic probing, or double hashing until an open slot is located or the entire table is probed (overflow). Searching for the item follows the same procedure until the item is located, an open slot is found, or the entire table has been searched (item not in table).
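The insert-or-replace procedure for chained hashing described above can be sketched in a few lines of Python (a minimal illustration with invented names, not a production design; it omits resizing and load-factor management):

```python
class ChainedHashTable:
    """Minimal chained hash table: each slot heads a list of (key, value) pairs."""

    def __init__(self, size=8):
        self.slots = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.slots)  # hash code reduced to a slot index

    def put(self, key, value):
        chain = self.slots[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:                    # key already present: replace the old item
                chain[i] = (key, value)
                return
        chain.append((key, value))          # otherwise add the colliding item to the chain

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        return None                         # item not in table
```

Lookup follows the same path as insertion: hash to a slot, then search that slot's chain linearly.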
=== Specialized uses ===
Hash functions are also used to build caches for large data sets stored in slow media. A cache is generally simpler than a hashed search table, since any collision can be resolved by discarding or writing back the older of the two colliding items.
Hash functions are an essential ingredient of the Bloom filter, a space-efficient probabilistic data structure that is used to test whether an element is a member of a set.
A special case of hashing is known as geometric hashing or the grid method. In these applications, the set of all inputs is some sort of metric space, and the hashing function can be interpreted as a partition of that space into a grid of cells. The table is often an array with two or more indices (called a grid file, grid index, bucket grid, and similar names), and the hash function returns an index tuple. This principle is widely used in computer graphics, computational geometry, and many other disciplines, to solve many proximity problems in the plane or in three-dimensional space, such as finding closest pairs in a set of points, similar shapes in a list of shapes, similar images in an image database, and so on.
Hash tables are also used to implement associative arrays and dynamic sets.
== Properties ==
=== Uniformity ===
A good hash function should map the expected inputs as evenly as possible over its output range. That is, every hash value in the output range should be generated with roughly the same probability. The reason for this last requirement is that the cost of hashing-based methods goes up sharply as the number of collisions—pairs of inputs that are mapped to the same hash value—increases. If some hash values are more likely to occur than others, then a larger fraction of the lookup operations will have to search through a larger set of colliding table entries.
This criterion only requires the value to be uniformly distributed, not random in any sense. A good randomizing function is (barring computational efficiency concerns) generally a good choice as a hash function, but the converse need not be true.
Hash tables often contain only a small subset of the valid inputs. For instance, a club membership list may contain only a hundred or so member names, out of the very large set of all possible names. In these cases, the uniformity criterion should hold for almost all typical subsets of entries that may be found in the table, not just for the global set of all possible entries.
In other words, if a typical set of m records is hashed to n table slots, then the probability of a bucket receiving many more than m/n records should be vanishingly small. In particular, if m < n, then very few buckets should have more than one or two records. A small number of collisions is virtually inevitable, even if n is much larger than m—see the birthday problem.
In special cases when the keys are known in advance and the key set is static, a hash function can be found that achieves absolute (or collisionless) uniformity. Such a hash function is said to be perfect. There is no algorithmic way of constructing such a function—searching for one is a factorial function of the number of keys to be mapped versus the number of table slots that they are mapped into. Finding a perfect hash function over more than a very small set of keys is usually computationally infeasible; the resulting function is likely to be more computationally complex than a standard hash function and provides only a marginal advantage over a function with good statistical properties that yields a minimum number of collisions. See universal hash function.
=== Testing and measurement ===
When testing a hash function, the uniformity of the distribution of hash values can be evaluated by the chi-squared test. This test is a goodness-of-fit measure: it is the actual distribution of items in buckets versus the expected (or uniform) distribution of items. The formula is
{\displaystyle {\frac {\sum _{j=0}^{m-1}(b_{j})(b_{j}+1)/2}{(n/2m)(n+2m-1)}},}
where n is the number of keys, m is the number of buckets, and bj is the number of items in bucket j.
A ratio within one confidence interval (such as 0.95 to 1.05) indicates that the hash function evaluated has an expected uniform distribution.
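The test statistic above can be computed directly from the bucket counts (a direct transcription of the formula; the function name is ours):

```python
def chi_squared_ratio(bucket_counts):
    """Ratio of actual to expected sum of per-bucket collision terms.

    bucket_counts[j] is the number of items in bucket j; a ratio near 1
    suggests the hash function distributes keys uniformly."""
    m = len(bucket_counts)
    n = sum(bucket_counts)
    actual = sum(b * (b + 1) / 2 for b in bucket_counts)
    expected = (n / (2 * m)) * (n + 2 * m - 1)
    return actual / expected
```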
Hash functions can have some technical properties that make it more likely that they will have a uniform distribution when applied. One is the strict avalanche criterion: whenever a single input bit is complemented, each of the output bits changes with a 50% probability. The reason for this property is that selected subsets of the keyspace may have low variability. For the output to be uniformly distributed, a low amount of variability, even one bit, should translate into a high amount of variability (i.e. distribution over the tablespace) in the output. Each bit should change with a probability of 50% because, if some bits are reluctant to change, then the keys become clustered around those values. If the bits change too readily, then the mapping approaches a fixed XOR function of a single bit. Standard tests for this property have been described in the literature, and the criterion's relevance to multiplicative hash functions has also been assessed.
=== Efficiency ===
In data storage and retrieval applications, the use of a hash function is a trade-off between search time and data storage space. If search time were unbounded, then a very compact unordered linear list would be the best medium; if storage space were unbounded, then a randomly accessible structure indexable by the key-value would be very large and very sparse, but very fast. A hash function takes a finite amount of time to map a potentially large keyspace to a feasible amount of storage space searchable in a bounded amount of time regardless of the number of keys. In most applications, the hash function should be computable with minimum latency and secondarily in a minimum number of instructions.
Computational complexity varies with the number of instructions required and latency of individual instructions, with the simplest being the bitwise methods (folding), followed by the multiplicative methods, and the most complex (slowest) are the division-based methods.
Because collisions should be infrequent, and cause a marginal delay but are otherwise harmless, it is usually preferable to choose a faster hash function over one that needs more computation but saves a few collisions.
Division-based implementations can be of particular concern because a division requires multiple cycles on nearly all processor microarchitectures. Division (modulo) by a constant can be inverted to become a multiplication by the word-size multiplicative-inverse of that constant. This can be done by the programmer, or by the compiler. Division can also be reduced directly into a series of shift-subtracts and shift-adds, though minimizing the number of such operations required is a daunting problem; the number of machine-language instructions resulting may be more than a dozen and swamp the pipeline. If the microarchitecture has hardware multiply functional units, then the multiply-by-inverse is likely a better approach.
We can allow the table size n to not be a power of 2 and still not have to perform any remainder or division operation, as these computations are sometimes costly. For example, let n be significantly less than 2b. Consider a pseudorandom number generator function P(key) that is uniform on the interval [0, 2b − 1]. A hash function uniform on the interval [0, n − 1] is n P(key) / 2b. We can replace the division by a (possibly faster) right bit shift: n P(key) >> b.
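The shift-based range reduction above can be written as follows (a small sketch; P(key) stands in for any hash uniform on b bits):

```python
b = 32  # P(key) is assumed uniform on [0, 2**b - 1]

def to_range(p_key, n):
    """Map a b-bit uniform value to [0, n - 1] without division or modulo."""
    return (n * p_key) >> b
```

The multiply scales the value into [0, n·2^b), and the right shift discards the low b bits, leaving a result in [0, n − 1].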
If keys are being hashed repeatedly, and the hash function is costly, then computing time can be saved by precomputing the hash codes and storing them with the keys. Matching hash codes almost certainly means that the keys are identical. This technique is used for the transposition table in game-playing programs, which stores a 64-bit hashed representation of the board position.
=== Universality ===
A universal hashing scheme is a randomized algorithm that selects a hash function h among a family of such functions, in such a way that the probability of a collision of any two distinct keys is 1/m, where m is the number of distinct hash values desired—independently of the two keys. Universal hashing ensures (in a probabilistic sense) that the hash function application will behave as well as if it were using a random function, for any distribution of the input data. It will, however, have more collisions than perfect hashing and may require more operations than a special-purpose hash function.
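A classic universal family (due to Carter and Wegman) draws random parameters a and b and computes ((a·k + b) mod p) mod m, for a prime p larger than any key. A sketch, with an illustrative choice of p:

```python
import random

def make_universal_hash(m, p=2_147_483_647):  # p = 2**31 - 1, a Mersenne prime
    """Draw one member of the universal family h(k) = ((a*k + b) mod p) mod m."""
    a = random.randrange(1, p)   # a must be nonzero
    b = random.randrange(0, p)
    return lambda k: ((a * k + b) % p) % m
```

The collision guarantee holds for keys smaller than p; each call to make_universal_hash selects a fresh member of the family.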
=== Applicability ===
A hash function that allows only certain table sizes or strings only up to a certain length, or cannot accept a seed (i.e. allow double hashing) is less useful than one that does.
A hash function is applicable in a variety of situations. Particularly within cryptography, notable applications include:
Integrity checking: matching hash values almost certainly indicate identical files, providing a reliable means to detect file modifications.
Key derivation: Minor input changes result in a random-looking output alteration, known as the diffusion property. Thus, hash functions are valuable for key derivation functions.
Message authentication codes (MACs): Through the integration of a confidential key with the input data, hash functions can generate MACs ensuring the genuineness of the data, such as in HMACs.
Password storage: The password's hash value does not expose any password details, emphasizing the importance of securely storing hashed passwords on the server.
Signatures: Message hashes are signed rather than the whole message.
=== Deterministic ===
A hash procedure must be deterministic—for a given input value, it must always generate the same hash value. In other words, it must be a function of the data to be hashed, in the mathematical sense of the term. This requirement excludes hash functions that depend on external variable parameters, such as pseudo-random number generators or the time of day. It also excludes functions that depend on the memory address of the object being hashed, because the address may change during execution (as may happen on systems that use certain methods of garbage collection), although sometimes rehashing of the item is possible.
The determinism is in the context of the reuse of the function. For example, Python adds the feature that hash functions make use of a randomized seed that is generated once when the Python process starts in addition to the input to be hashed. The Python hash (SipHash) is still a valid hash function when used within a single run, but if the values are persisted (for example, written to disk), they can no longer be treated as valid hash values, since in the next run the random value might differ.
=== Defined range ===
It is often desirable that the output of a hash function have fixed size (but see below). If, for example, the output is constrained to 32-bit integer values, then the hash values can be used to index into an array. Such hashing is commonly used to accelerate data searches. Producing fixed-length output from variable-length input can be accomplished by breaking the input data into chunks of specific size. Hash functions used for data searches use some arithmetic expression that iteratively processes chunks of the input (such as the characters in a string) to produce the hash value.
=== Variable range ===
In many applications, the range of hash values may be different for each run of the program or may change along the same run (for instance, when a hash table needs to be expanded). In those situations, one needs a hash function which takes two parameters—the input data z, and the number n of allowed hash values.
A common solution is to compute a fixed hash function with a very large range (say, 0 to 232 − 1), divide the result by n, and use the division's remainder. If n is itself a power of 2, this can be done by bit masking and bit shifting. When this approach is used, the hash function must be chosen so that the result has fairly uniform distribution between 0 and n − 1, for any value of n that may occur in the application. Depending on the function, the remainder may be uniform only for certain values of n, e.g. odd or prime numbers.
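When n is a power of 2, the remainder reduces to a bit mask (a small sketch; h stands for the fixed wide-range hash value):

```python
def reduce_range(h, n):
    """Reduce a wide-range hash value h to [0, n - 1]."""
    if n & (n - 1) == 0:          # n is a power of 2: mask off the low bits
        return h & (n - 1)
    return h % n                  # general case: remainder after division
```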
=== Variable range with minimal movement (dynamic hash function) ===
When the hash function is used to store values in a hash table that outlives the run of the program, and the hash table needs to be expanded or shrunk, the hash table is referred to as a dynamic hash table.
A hash function that will relocate the minimum number of records when the table is resized is desirable. What is needed is a hash function H(z,n) (where z is the key being hashed and n is the number of allowed hash values) such that H(z,n + 1) = H(z,n) with probability close to n/(n + 1).
Linear hashing and spiral hashing are examples of dynamic hash functions that execute in constant time but relax the property of uniformity to achieve the minimal movement property. Extendible hashing uses a dynamic hash function that requires space proportional to n to compute the hash function, and it becomes a function of the previous keys that have been inserted. Several algorithms that preserve the uniformity property but require time proportional to n to compute the value of H(z,n) have been invented.
A hash function with minimal movement is especially useful in distributed hash tables.
=== Data normalization ===
In some applications, the input data may contain features that are irrelevant for comparison purposes. For example, when looking up a personal name, it may be desirable to ignore the distinction between upper and lower case letters. For such data, one must use a hash function that is compatible with the data equivalence criterion being used: that is, any two inputs that are considered equivalent must yield the same hash value. This can be accomplished by normalizing the input before hashing it, as by upper-casing all letters.
== Hashing integer data types ==
There are several common algorithms for hashing integers. The method giving the best distribution is data-dependent. One of the simplest and most common methods in practice is the modulo division method.
=== Identity hash function ===
If the data to be hashed is small enough, then one can use the data itself (reinterpreted as an integer) as the hashed value. The cost of computing this identity hash function is effectively zero. This hash function is perfect, as it maps each input to a distinct hash value.
The meaning of "small enough" depends on the size of the type that is used as the hashed value. For example, in Java, the hash code is a 32-bit integer. Thus the 32-bit integer Integer and 32-bit floating-point Float objects can simply use the value directly, whereas the 64-bit integer Long and 64-bit floating-point Double cannot.
Other types of data can also use this hashing scheme. For example, when mapping character strings between upper and lower case, one can use the binary encoding of each character, interpreted as an integer, to index a table that gives the alternative form of that character ("A" for "a", "8" for "8", etc.). If each character is stored in 8 bits (as in extended ASCII or ISO Latin 1), the table has only 28 = 256 entries; in the case of Unicode characters, the table would have 17 × 216 = 1114112 entries.
The same technique can be used to map two-letter country codes like "us" or "za" to country names (262 = 676 table entries), 5-digit ZIP codes like 13083 to city names (100000 entries), etc. Invalid data values (such as the country code "xx" or the ZIP code 00000) may be left undefined in the table or mapped to some appropriate "null" value.
=== Trivial hash function ===
If the keys are uniformly or sufficiently uniformly distributed over the key space, so that the key values are essentially random, then they may be considered to be already "hashed". In this case, any number of any bits in the key may be extracted and collated as an index into the hash table. For example, a simple hash function might mask off the m least significant bits and use the result as an index into a hash table of size 2m.
=== Mid-squares ===
A mid-squares hash code is produced by squaring the input and extracting an appropriate number of middle digits or bits. For example, if the input is 123456789 and the hash table size is 10000, then squaring the key produces 15241578750190521, and the hash code is taken as the middle 4 digits of the 17-digit result (ignoring the high digit): 8750. The mid-squares method produces a reasonable hash code if the key does not have many leading or trailing zeros. This is a variant of multiplicative hashing, but not as good, because an arbitrary key is not a good multiplier.
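The worked example above, in a short sketch (decimal digits for clarity; practical versions extract middle bits instead):

```python
def mid_square(key, digits=4):
    """Square the key and take the middle `digits` decimal digits as the hash."""
    s = str(key * key)
    if len(s) % 2:                # ignore the high digit of an odd-length square
        s = s[1:]
    mid = len(s) // 2
    return int(s[mid - digits // 2: mid + digits // 2])
```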
=== Division hashing ===
A standard technique is to use a modulo function on the key, by selecting a divisor M which is a prime number close to the table size, so h(K) ≡ K (mod M). The table size is usually a power of 2. This gives a distribution from {0, …, M − 1}. This gives good results over a large number of key sets. A significant drawback of division hashing is that division requires multiple cycles on most modern architectures (including x86) and can be 10 times slower than multiplication. A second drawback is that it will not break up clustered keys. For example, the keys 123000, 456000, 789000, etc. modulo 1000 all map to the same address. This technique works well in practice because many key sets are sufficiently random already, and the probability that a key set will be cyclical by a large prime number is small.
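The clustering drawback is easy to demonstrate: a round, composite divisor collapses keys that share a factor, while a nearby prime spreads them (the keys below follow the example in the text):

```python
keys = [123000, 456000, 789000]

# Divisor 1000 divides every key: all three map to the same address.
assert [k % 1000 for k in keys] == [0, 0, 0]

# A prime divisor close to 1000 breaks the cluster apart.
print([k % 997 for k in keys])
```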
=== Algebraic coding ===
Algebraic coding is a variant of the division method of hashing which uses division by a polynomial modulo 2 instead of an integer to map n bits to m bits.: 512–513 In this approach, M = 2m, and we postulate an mth-degree polynomial Z(x) = xm + ζm−1xm−1 + ⋯ + ζ0. A key K = (kn−1…k1k0)2 can be regarded as the polynomial K(x) = kn−1xn−1 + ⋯ + k1x + k0. The remainder using polynomial arithmetic modulo 2 is K(x) mod Z(x) = hm−1xm−1 + ⋯ + h1x + h0. Then h(K) = (hm−1…h1h0)2. If Z(x) is constructed to have t or fewer non-zero coefficients, then keys which share fewer than t bits are guaranteed to not collide.
Z is a function of k, t, and n (the last of which is a divisor of 2k − 1) and is constructed from the finite field GF(2k). Knuth gives an example: taking (n,m,t) = (15,10,7) yields Z(x) = x10 + x8 + x5 + x4 + x2 + x + 1. The derivation is as follows:
Let S be the smallest set of integers such that {1,2,…,t} ⊆ S and (2j mod n) ∈ S ∀j ∈ S.
Define
{\displaystyle P(x)=\prod _{j\in S}(x-\alpha ^{j})}
where α ∈ GF(2k) is an element of order n, and where the coefficients of P(x) are computed in this field. Then the degree of P(x) = |S|. Since α2j is a root of P(x) whenever αj is a root, the coefficients pi of P(x) satisfy pi2 = pi, so they are all 0 or 1. If R(x) = rn−1xn−1 + ⋯ + r1x + r0 is any nonzero polynomial modulo 2 with at most t nonzero coefficients, then R(x) is not a multiple of P(x) modulo 2. It follows that the corresponding hash function will map keys with fewer than t bits in common to unique indices.: 542–543
The usual outcome is that either n will get large, or t will get large, or both, for the scheme to be computationally feasible. Therefore, it is more suited to hardware or microcode implementation.: 542–543
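Division by Z(x) with arithmetic modulo 2 is just shift-and-XOR. A sketch using Knuth's example polynomial, where bit i of an integer holds the coefficient of x^i (an illustration of ours, not a hardware-ready implementation):

```python
def poly_mod2_hash(key, n, z, m):
    """Hash an n-bit key to m bits as the remainder of K(x) mod Z(x) over GF(2).

    `z` encodes Z(x) as an integer with bit m set (its leading term x^m)."""
    r = key
    for i in range(n - 1, m - 1, -1):   # cancel the high-order terms in turn
        if (r >> i) & 1:
            r ^= z << (i - m)
    return r

# Knuth's example: Z(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1, with (n, m) = (15, 10)
Z = 0b10100110111
```

Because remainder modulo a fixed polynomial is linear over GF(2), the hash of an XOR of keys equals the XOR of their hashes.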
=== Unique permutation hashing ===
Unique permutation hashing has a guaranteed best worst-case insertion time.
=== Multiplicative hashing ===
Standard multiplicative hashing uses the formula ha(K) = ⌊(aK mod W) / (W/M)⌋, which produces a hash value in {0, …, M − 1}. The value a is appropriately chosen: it should be relatively prime to W, it should be large, and its binary representation should be a random mix of 1s and 0s. An important practical special case occurs when W = 2w and M = 2m are powers of 2 and w is the machine word size. In this case, the formula becomes ha(K) = ⌊(aK mod 2w) / 2w−m⌋. This is special because arithmetic modulo 2w is done by default in low-level programming languages, and integer division by a power of 2 is simply a right-shift, so that, in C for example, the whole function reduces to a single multiplication followed by a right-shift. For fixed m and w this translates into a single integer multiplication and right-shift, making it one of the fastest hash functions to compute.
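The special case can be sketched in Python, with explicit masking to emulate the modular word arithmetic that unsigned C types provide for free (the constants w, m, and a below are illustrative choices of ours):

```python
w, m = 32, 10                 # word size and table-index size in bits
a = 2654435769                # an odd multiplier with well-mixed bits (0x9E3779B9)

def multiplicative_hash(key):
    """Compute ((a*K) mod 2**w) >> (w - m): an index into a 2**m-slot table."""
    return ((a * key) & ((1 << w) - 1)) >> (w - m)
```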
Multiplicative hashing is susceptible to a "common mistake" that leads to poor diffusion: higher-value input bits do not affect lower-value output bits. A transmutation on the input which shifts the span of retained top bits down and XORs or ADDs them to the key before the multiplication step corrects for this. The resulting function first folds the retained top bits of the key down onto its low bits, then multiplies and shifts as before.
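A sketch of the corrected form, folding the top bits down before the multiply (constants are illustrative choices of ours):

```python
w, m = 32, 10
a = 2654435769               # odd multiplier, 0x9E3779B9

def multiplicative_hash_mixed(key):
    """XOR the top (w - m) retained bits of the key onto its low bits, then
    multiply-shift, so high-value input bits also reach the low output bits."""
    key ^= key >> (w - m)
    return ((a * key) & ((1 << w) - 1)) >> (w - m)
```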
=== Fibonacci hashing ===
Fibonacci hashing is a form of multiplicative hashing in which the multiplier is 2w / ϕ, where w is the machine word length and ϕ (phi) is the golden ratio (approximately 1.618). A property of this multiplier is that blocks of consecutive keys are distributed uniformly over the table space, with respect to any block of bits in the key; consecutive keys within the high bits or low bits of the key (or some other field) are relatively common. The multipliers for various word lengths are:
16 bits: a = 0x9E37 = 40503
32 bits: a = 0x9E3779B9 = 2654435769
48 bits: a = 0x9E3779B97F4B = 173961102589771
64 bits: a = 0x9E3779B97F4A7C15 = 11400714819323198485
The multiplier should be odd, so the least significant bit of the output is invertible modulo 2w. The last two values given above are rounded (up and down, respectively) by more than 1/2 of a least-significant bit to achieve this.
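Fibonacci hashing is then just the multiplicative scheme with this multiplier; a 64-bit sketch:

```python
W = 1 << 64
A = 0x9E3779B97F4A7C15        # nearest odd integer to 2**64 / phi

def fibonacci_hash(key, m):
    """Map key to an m-bit table index via multiply-and-shift."""
    return ((A * key) & (W - 1)) >> (64 - m)
```

Even a run of consecutive small keys is scattered across the table, which is the property the multiplier is chosen for.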
=== Zobrist hashing ===
Tabulation hashing, more generally known as Zobrist hashing after Albert Zobrist, is a method for constructing universal families of hash functions by combining table lookup with XOR operations. This algorithm has proven to be very fast and of high quality for hashing purposes (especially hashing of integer-number keys).
Zobrist hashing was originally introduced as a means of compactly representing chess positions in computer game-playing programs. A unique random number was assigned to represent each type of piece (six each for black and white) on each space of the board. Thus a table of 64×12 such numbers is initialized at the start of the program. The random numbers could be any length, but 64 bits was natural due to the 64 squares on the board. A position was transcribed by cycling through the pieces in a position, indexing the corresponding random numbers (vacant spaces were not included in the calculation) and XORing them together (the starting value could be 0 (the identity value for XOR) or a random seed). The resulting value was reduced by modulo, folding, or some other operation to produce a hash table index. The original Zobrist hash was stored in the table as the representation of the position.
Later, the method was extended to hashing integers by representing each byte in each of 4 possible positions in the word by a unique 32-bit random number. Thus, a table of 28×4 random numbers is constructed. A 32-bit hashed integer is transcribed by successively indexing the table with the value of each byte of the plain text integer and XORing the loaded values together (again, the starting value can be the identity value or a random seed). The natural extension to 64-bit integers is by use of a table of 28×8 64-bit random numbers.
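The byte-table construction for 32-bit integers looks like this (table entries are drawn once at startup; the seed is fixed here only so the illustration is reproducible):

```python
import random

rng = random.Random(0)                      # fixed seed: illustration only
TABLE = [[rng.getrandbits(32) for _ in range(256)] for _ in range(4)]

def zobrist_hash32(x):
    """XOR together one random table entry per byte position of x."""
    h = 0
    for pos in range(4):
        byte = (x >> (8 * pos)) & 0xFF      # extract the byte at this position
        h ^= TABLE[pos][byte]
    return h
```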
This kind of function has some nice theoretical properties, one of which is called 3-tuple independence, meaning that every 3-tuple of keys is equally likely to be mapped to any 3-tuple of hash values.
=== Customized hash function ===
A hash function can be designed to exploit existing entropy in the keys. If the keys have leading or trailing zeros, or particular fields that are unused, always zero or some other constant, or generally vary little, then masking out only the volatile bits and hashing on those will provide a better and possibly faster hash function. Selected divisors or multipliers in the division and multiplicative schemes may make more uniform hash functions if the keys are cyclic or have other redundancies.
== Hashing variable-length data ==
When the data values are long (or variable-length) character strings—such as personal names, web page addresses, or mail messages—their distribution is usually very uneven, with complicated dependencies. For example, text in any natural language has highly non-uniform distributions of characters, and character pairs, characteristic of the language. For such data, it is prudent to use a hash function that depends on all characters of the string—and depends on each character in a different way.
=== Middle and ends ===
Simplistic hash functions may add the first and last n characters of a string along with the length, or form a word-size hash from the middle 4 characters of a string. This saves iterating over the (potentially long) string, but hash functions that do not hash on all characters of a string can readily become linear due to redundancies, clustering, or other pathologies in the key set. Such strategies may be effective as a custom hash function if the structure of the keys is such that either the middle, ends, or other fields are zero or some other invariant constant that does not differentiate the keys; then the invariant parts of the keys can be ignored.
=== Character folding ===
The paradigmatic example of folding by characters is to add up the integer values of all the characters in the string. A better idea is to multiply the hash total by a constant, typically a sizable prime number, before adding in the next character, ignoring overflow. Using exclusive-or instead of addition is also a plausible alternative. The final operation would be a modulo, mask, or other function to reduce the word value to an index the size of the table. The weakness of this procedure is that information may cluster in the upper or lower bits of the bytes; this clustering will remain in the hashed result and cause more collisions than a proper randomizing hash. ASCII byte codes, for example, have an upper bit of 0, and printable strings do not use the last byte code or most of the first 32 byte codes, so the information, which uses the remaining byte codes, is clustered in the remaining bits in an unobvious manner.
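Folding by characters with a prime multiplier, as described above (the prime 31 and the 32-bit accumulator are conventional illustrative choices, not prescribed by the text):

```python
def fold_hash(s, table_size=1024):
    """Multiply-and-add over the characters, ignoring overflow past 32 bits."""
    h = 0
    for ch in s:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF   # prime multiplier, wrap at 2**32
    return h % table_size                     # reduce to a table index
```

The multiplier makes the hash order-sensitive: permuting the characters changes the result, unlike plain addition.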
The classic approach, dubbed the PJW hash based on the work of Peter J. Weinberger at Bell Labs in the 1970s, was originally designed for hashing identifiers into compiler symbol tables as given in the "Dragon Book". This hash function offsets the bytes 4 bits before adding them together. When the quantity wraps, the high 4 bits are shifted out and if non-zero, xored back into the low byte of the cumulative quantity. The result is a word-size hash code to which a modulo or other reducing operation can be applied to produce the final hash index.
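The description above corresponds roughly to the following sketch of the PJW hash for a 32-bit word (a reconstruction from the prose, not the Dragon Book's verbatim code):

```python
def pjw_hash(s: str) -> int:
    """PJW hash: shift each byte in 4 bits up; when the top nibble of the
    32-bit word fills, xor it back into a low byte and clear it."""
    h = 0
    for ch in s:
        h = (h << 4) + ord(ch)
        high = h & 0xF0000000      # the four bits about to be shifted out
        if high:
            h ^= high >> 24        # fold them back into a low byte
            h &= 0x0FFFFFFF        # clear the shifted-out bits
    return h                       # reduce with h % table_size for an index
```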
Today, especially with the advent of 64-bit word sizes, much more efficient variable-length string hashing by word chunks is available.
=== Word length folding ===
Modern microprocessors will allow for much faster processing if 8-bit character strings are not hashed by processing one character at a time, but by interpreting the string as an array of 32-bit or 64-bit integers and hashing/accumulating these "wide word" integer values by means of arithmetic operations (e.g. multiplication by constant and bit-shifting). The final word, which may have unoccupied byte positions, is filled with zeros or a specified randomizing value before being folded into the hash. The accumulated hash code is reduced by a final modulo or other operation to yield an index into the table.
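A sketch of accumulating by word-sized chunks (the 64-bit multiplier is an arbitrary illustrative constant; the zero-fill of the final partial word follows the description above):

```python
import struct

def word_fold_hash(data: bytes, table_size: int) -> int:
    """Accumulate the string eight bytes (one 64-bit word) at a time."""
    padded = data + b"\x00" * (-len(data) % 8)   # zero-fill the last partial word
    h = 0
    for (word,) in struct.iter_unpack("<Q", padded):
        h = (h * 0x100000001B3 + word) & 0xFFFFFFFFFFFFFFFF  # 64-bit accumulate
    return h % table_size
```

Note one caveat of plain zero-padding: strings that differ only in trailing NUL bytes pad to the same words and collide, which is why a length mix-in or a "specified randomizing value" for the pad is sometimes preferred.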
=== Radix conversion hashing ===
Analogous to the way an ASCII or EBCDIC character string representing a decimal number is converted to a numeric quantity for computing, a variable-length string can be converted as x_{k−1}·a^{k−1} + x_{k−2}·a^{k−2} + ⋯ + x_1·a + x_0. This is simply a polynomial in a radix a > 1 that takes the components (x_0, x_1, ..., x_{k−1}) as the characters of the input string of length k. It can be used directly as the hash code, or a hash function applied to it to map the potentially large value to the hash table size. The value of a is usually a prime number large enough to hold the number of different characters in the character set of potential keys. Radix conversion hashing of strings minimizes the number of collisions. Available data sizes may restrict the maximum length of string that can be hashed with this method. For example, a 128-bit word will hash only a 26-character alphabetic string (ignoring case) with a radix of 29; a printable ASCII string is limited to 9 characters using radix 97 and a 64-bit word. However, alphabetic keys are usually of modest length, because keys must be stored in the hash table. Numeric character strings are usually not a problem; 64 bits can count up to 10^19, or 19 decimal digits with radix 10.
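A sketch of radix conversion via Horner's rule (radix 97 here, matching the printable-ASCII example; Python big integers stand in for a fixed-width word):

```python
def radix_hash(s: str, radix: int = 97) -> int:
    """Treat the characters as digits of a base-`radix` number:
    x_{k-1}*a^{k-1} + ... + x_1*a + x_0, evaluated by Horner's rule."""
    value = 0
    for ch in s:
        value = value * radix + ord(ch)
    return value  # apply % table_size (or another hash) to index a table
```

Because distinct strings of bounded length map to distinct numbers when the radix exceeds the alphabet size, the conversion itself introduces no collisions; only the final reduction to the table size can.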
=== Rolling hash ===
In some applications, such as substring search, one can compute a hash function h for every k-character substring of a given n-character string by advancing a window of width k characters along the string, where k is a fixed integer, and n > k. The straightforward solution, which is to extract such a substring at every character position in the text and compute h separately, requires a number of operations proportional to k·n. However, with the proper choice of h, one can use the technique of rolling hash to compute all those hashes with an effort proportional to mk + n where m is the number of occurrences of the substring.
The most familiar algorithm of this type is Rabin-Karp with best and average case performance O(n+mk) and worst case O(n·k) (in all fairness, the worst case here is gravely pathological: both the text string and substring are composed of a repeated single character, such as t="AAAAAAAAAAA", and s="AAA"). The hash function used for the algorithm is usually the Rabin fingerprint, designed to avoid collisions in 8-bit character strings, but other suitable hash functions are also used.
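The sliding-window update can be sketched as follows (a plain polynomial hash modulo a prime stands in for the Rabin fingerprint):

```python
def rolling_hashes(text: str, k: int, base: int = 256, mod: int = 1_000_003):
    """Yield the hash of every k-character window in O(n) total time.

    Each slide removes the outgoing character's contribution and
    appends the incoming one, instead of rehashing the whole window.
    """
    if len(text) < k:
        return
    msb = pow(base, k - 1, mod)   # weight of the window's leading character
    h = 0
    for ch in text[:k]:
        h = (h * base + ord(ch)) % mod
    yield h
    for i in range(k, len(text)):
        h = (h - ord(text[i - k]) * msb) % mod  # drop the outgoing character
        h = (h * base + ord(text[i])) % mod     # absorb the incoming one
        yield h
```

In Rabin–Karp, windows whose hash matches the pattern's hash are then verified character by character, which is where the mk term in the running time comes from.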
=== Fuzzy hash ===
=== Perceptual hash ===
== Analysis ==
Worst case results for a hash function can be assessed two ways: theoretical and practical. The theoretical worst case is the probability that all keys map to a single slot. The practical worst case is the expected longest probe sequence (hash function + collision resolution method). This analysis considers uniform hashing, that is, any key will map to any particular slot with probability 1/m, a characteristic of universal hash functions.
While Knuth worries about adversarial attacks on real-time systems, Gonnet has shown that the probability of such a case is "ridiculously small". His result was that the probability of k of n keys mapping to a single slot is α^k / (e^α k!), where α is the load factor, n/m.
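Gonnet's expression is the Poisson probability with mean α; a small sketch makes the "ridiculously small" claim concrete:

```python
import math

def prob_k_in_one_slot(k: int, load_factor: float) -> float:
    """Probability that k of n keys map to a single slot under uniform
    hashing: alpha**k / (e**alpha * k!), with alpha = n/m."""
    a = load_factor
    return a ** k / (math.exp(a) * math.factorial(k))

# Even at full load (alpha = 1), twenty keys in one slot is vanishingly rare.
print(prob_k_in_one_slot(20, 1.0))
```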
== History ==
The term hash offers a natural analogy with its non-technical meaning (to chop up or make a mess out of something), given how hash functions scramble their input data to derive their output. In his research for the precise origin of the term, Donald Knuth notes that, while Hans Peter Luhn of IBM appears to have been the first to use the concept of a hash function in a memo dated January 1953, the term itself did not appear in published literature until the late 1960s, in Herbert Hellerman's Digital Computer System Principles, even though it was already widespread jargon by then.
== See also ==
== Notes ==
== References ==
== External links ==
The Goulburn Hashing Function (PDF) by Mayur Patel
Hash Function Construction for Textual and Geometrical Data Retrieval (PDF) Latest Trends on Computers, Vol.2, pp. 483–489, CSCC Conference, Corfu, 2010
In systems theory, a linear system is a mathematical model of a system based on the use of a linear operator.
Linear systems typically exhibit features and properties that are much simpler than the nonlinear case.
As a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing, and telecommunications. For example, the propagation medium for wireless communication systems can often be modeled by linear systems.
== Definition ==
A general deterministic system can be described by an operator, H, that maps an input, x(t), as a function of t to an output, y(t), a type of black box description.
A system is linear if and only if it satisfies the superposition principle, or equivalently both the additivity and homogeneity properties, without restrictions (that is, for all inputs, all scaling constants, and all time).
The superposition principle means that a linear combination of inputs to the system produces a linear combination of the individual zero-state outputs (that is, outputs setting the initial conditions to zero) corresponding to the individual inputs.
In a system that satisfies the homogeneity property, scaling the input always results in scaling the zero-state response by the same factor. In a system that satisfies the additivity property, adding two inputs always results in adding the corresponding two zero-state responses due to the individual inputs.
Mathematically, for a continuous-time system, given two arbitrary inputs
{\displaystyle {\begin{aligned}x_{1}(t)\\x_{2}(t)\end{aligned}}}
as well as their respective zero-state outputs
{\displaystyle {\begin{aligned}y_{1}(t)&=H\left\{x_{1}(t)\right\}\\y_{2}(t)&=H\left\{x_{2}(t)\right\}\end{aligned}}}
then a linear system must satisfy
{\displaystyle \alpha y_{1}(t)+\beta y_{2}(t)=H\left\{\alpha x_{1}(t)+\beta x_{2}(t)\right\}}
for any scalar values α and β, for any input signals x1(t) and x2(t), and for all time t.
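The condition can be checked numerically on sampled signals; the sketch below (our own toy systems, a pure scaler versus a squarer, with arbitrary test signals) shows superposition holding for one and failing for the other:

```python
# Sampled signals as plain lists; alpha and beta are arbitrary scalars.
t = [i / 100 for i in range(100)]
x1 = [v * v for v in t]
x2 = [1.0 - v for v in t]
alpha, beta = 2.0, -3.0

def H_scale(x):               # y(t) = 5 x(t): a linear system
    return [5.0 * v for v in x]

def H_square(x):              # y(t) = x(t)**2: a nonlinear system
    return [v * v for v in x]

combined = [alpha * a + beta * b for a, b in zip(x1, x2)]

lhs = H_scale(combined)
rhs = [alpha * a + beta * b for a, b in zip(H_scale(x1), H_scale(x2))]
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))   # superposition holds

lhs = H_square(combined)
rhs = [alpha * a + beta * b for a, b in zip(H_square(x1), H_square(x2))]
assert any(abs(a - b) > 1e-9 for a, b in zip(lhs, rhs))   # and fails here
```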
The system is then defined by the equation H(x(t)) = y(t), where y(t) is some arbitrary function of time, and x(t) is the system state. Given y(t) and H, the system can be solved for x(t).
The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation.
This mathematical property makes the solution of modelling equations simpler than many nonlinear systems.
For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function x(t) in terms of unit impulses or frequency components.
Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).
Another perspective is that solutions to linear systems comprise a system of functions which act like vectors in the geometric sense.
A common use of linear models is to describe a nonlinear system by linearization. This is usually done for mathematical convenience.
The previous definition of a linear system is applicable to SISO (single-input single-output) systems. For MIMO (multiple-input multiple-output) systems, input and output signal vectors ({\displaystyle {\mathbf {x} }_{1}(t)}, {\displaystyle {\mathbf {x} }_{2}(t)}, {\displaystyle {\mathbf {y} }_{1}(t)}, {\displaystyle {\mathbf {y} }_{2}(t)}) are considered instead of input and output signals ({\displaystyle x_{1}(t)}, {\displaystyle x_{2}(t)}, {\displaystyle y_{1}(t)}, {\displaystyle y_{2}(t)}).
This definition of a linear system is analogous to the definition of a linear differential equation in calculus, and a linear transformation in linear algebra.
=== Examples ===
A simple harmonic oscillator obeys the differential equation:
{\displaystyle m{\frac {d^{2}(x)}{dt^{2}}}=-kx.}
If
{\displaystyle H(x(t))=m{\frac {d^{2}(x(t))}{dt^{2}}}+kx(t),}
then H is a linear operator. Letting y(t) = 0, we can rewrite the differential equation as H(x(t)) = y(t), which shows that a simple harmonic oscillator is a linear system.
Other examples of linear systems include those described by {\displaystyle y(t)=k\,x(t)}, {\displaystyle y(t)=k\,{\frac {\mathrm {d} x(t)}{\mathrm {d} t}}}, {\displaystyle y(t)=k\,\int _{-\infty }^{t}x(\tau )\mathrm {d} \tau }, and any system described by ordinary linear differential equations. Systems described by {\displaystyle y(t)=k}, {\displaystyle y(t)=k\,x(t)+k_{0}}, {\displaystyle y(t)=\sin {[x(t)]}}, {\displaystyle y(t)=\cos {[x(t)]}}, {\displaystyle y(t)=x^{2}(t)}, {\textstyle y(t)={\sqrt {x(t)}}}, {\displaystyle y(t)=|x(t)|}, and a system with odd-symmetry output consisting of a linear region and a saturation (constant) region, are non-linear because they do not always satisfy the superposition principle.
The output versus input graph of a linear system need not be a straight line through the origin. For example, consider a system described by {\displaystyle y(t)=k\,{\frac {\mathrm {d} x(t)}{\mathrm {d} t}}} (such as a constant-capacitance capacitor or a constant-inductance inductor). It is linear because it satisfies the superposition principle. However, when the input is a sinusoid, the output is also a sinusoid, and so its output-input plot is an ellipse centered at the origin rather than a straight line passing through the origin.
Also, the output of a linear system can contain harmonics (and have a smaller fundamental frequency than the input) even when the input is a sinusoid. For example, consider a system described by {\displaystyle y(t)=(1.5+\cos {(t)})\,x(t)}. It is linear because it satisfies the superposition principle. However, when the input is a sinusoid of the form {\displaystyle x(t)=\cos {(3t)}}, using product-to-sum trigonometric identities it can be shown that the output is {\displaystyle y(t)=1.5\cos {(3t)}+0.5\cos {(2t)}+0.5\cos {(4t)}}; that is, the output does not consist only of sinusoids of the same frequency as the input (3 rad/s), but also of sinusoids of frequencies 2 rad/s and 4 rad/s. Furthermore, taking the least common multiple of the fundamental periods of the output sinusoids, the fundamental angular frequency of the output is 1 rad/s, which differs from that of the input.
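The product-to-sum expansion quoted above can be confirmed numerically; a small sketch:

```python
import math

# (1.5 + cos t) * cos 3t  ==  1.5 cos 3t + 0.5 cos 2t + 0.5 cos 4t,
# since cos(t)cos(3t) = 0.5*[cos(2t) + cos(4t)].
for i in range(1000):
    t = i * 0.013
    direct = (1.5 + math.cos(t)) * math.cos(3 * t)
    expanded = (1.5 * math.cos(3 * t)
                + 0.5 * math.cos(2 * t)
                + 0.5 * math.cos(4 * t))
    assert abs(direct - expanded) < 1e-12
```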
== Time-varying impulse response ==
The time-varying impulse response h(t2, t1) of a linear system is defined as the response of the system at time t = t2 to a single impulse applied at time t = t1. In other words, if the input x(t) to a linear system is
{\displaystyle x(t)=\delta (t-t_{1})}
where δ(t) represents the Dirac delta function, and the corresponding response y(t) of the system is
{\displaystyle y(t=t_{2})=h(t_{2},t_{1})}
then the function h(t2, t1) is the time-varying impulse response of the system. Since the system cannot respond before the input is applied the following causality condition must be satisfied:
{\displaystyle h(t_{2},t_{1})=0,t_{2}<t_{1}}
== The convolution integral ==
The output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition:
{\displaystyle y(t)=\int _{-\infty }^{t}h(t,t')x(t')dt'=\int _{-\infty }^{\infty }h(t,t')x(t')dt'}
If the properties of the system do not depend on the time at which it is operated then it is said to be time-invariant and h is a function only of the time difference τ = t − t' which is zero for τ < 0 (namely t < t' ). By redefinition of h it is then possible to write the input-output relation equivalently in any of the ways,
{\displaystyle y(t)=\int _{-\infty }^{t}h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(\tau )x(t-\tau )d\tau =\int _{0}^{\infty }h(\tau )x(t-\tau )d\tau }
Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function called the transfer function which is:
{\displaystyle H(s)=\int _{0}^{\infty }h(t)e^{-st}\,dt.}
In applications this is usually a rational algebraic function of s. Because h(t) is zero for negative t, the integral may equally be written over the doubly infinite range and putting s = iω follows the formula for the frequency response function:
{\displaystyle H(i\omega )=\int _{-\infty }^{\infty }h(t)e^{-i\omega t}dt}
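For a concrete check, take the impulse response h(t) = e^(−t) for t ≥ 0 (a first-order lag, our illustrative choice), whose transfer function is H(s) = 1/(s + 1); a crude numerical integration reproduces H(iω):

```python
import cmath

def freq_response(w: float, dt: float = 1e-3, t_max: float = 30.0) -> complex:
    """Approximate H(i*w) = integral of h(t) e^{-i w t} dt for h(t) = e^{-t}."""
    H = 0j
    t = 0.0
    while t < t_max:
        H += cmath.exp(-t) * cmath.exp(-1j * w * t) * dt  # rectangle rule
        t += dt
    return H

# Compare against the closed form 1/(1 + i*w).
for w in (0.0, 1.0, 5.0):
    assert abs(freq_response(w) - 1 / (1 + 1j * w)) < 1e-2
```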
== Discrete-time systems ==
The output of any discrete time linear system is related to the input by the time-varying convolution sum:
{\displaystyle y[n]=\sum _{m=-\infty }^{n}{h[n,m]x[m]}=\sum _{m=-\infty }^{\infty }{h[n,m]x[m]}}
or equivalently for a time-invariant system on redefining h,
{\displaystyle y[n]=\sum _{k=0}^{\infty }{h[k]x[n-k]}=\sum _{k=-\infty }^{\infty }{h[k]x[n-k]}}
where {\displaystyle k=n-m} represents the lag time between the stimulus at time m and the response at time n.
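The time-invariant convolution sum can be sketched directly for finite causal sequences (illustrative code, not a production DSP routine):

```python
def convolve(h, x):
    """y[n] = sum_k h[k] * x[n-k] for finite, causal h and x."""
    y = [0.0] * (len(h) + len(x) - 1)
    for k, hk in enumerate(h):
        for m, xm in enumerate(x):
            y[k + m] += hk * xm   # the term h[k]*x[m] lands at n = k + m
    return y

# A two-tap averaging filter h applied to a short unit step x:
print(convolve([0.5, 0.5], [1, 1, 1, 1]))   # [0.5, 1.0, 1.0, 1.0, 0.5]
```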
== See also ==
Shift invariant system
Linear control
Linear time-invariant system
Nonlinear system
System analysis
System of linear equations
== References ==
Cardiotocography (CTG) is a technique used to monitor the fetal heartbeat and uterine contractions during pregnancy and labour. The machine used to perform the monitoring is called a cardiotocograph.
Fetal heart sounds were described as early as 350 years ago and approximately 200 years ago mechanical stethoscopes, such as the Pinard horn, were introduced in clinical practice.
Modern-day CTG was developed and introduced in the 1950s and early 1960s by Edward Hon, Roberto Caldeyro-Barcia and Konrad Hammacher. The first commercial fetal monitor (Hewlett-Packard 8020A) was released in 1968.
CTG monitoring is widely used to assess fetal well-being by identifying babies at risk of hypoxia (lack of oxygen). CTG is mainly used during labour. A review found that in the antenatal period (before labour), there is no evidence to suggest that monitoring women with high-risk pregnancies benefits the mother or baby, although research around this is old and should be interpreted with caution. Up-to-date research is needed to provide more information surrounding this practice.
A study found that CTG monitoring did not significantly improve or worsen pregnancy outcomes for high-risk mothers, in terms of preventable child death and post-birth mortality. However, the evidence examined in the study is quite old, and there have been significant changes in medical care since then.
== Methods ==
External cardiotocography can be used for continuous or intermittent monitoring. The fetal heart rate and the activity of the uterine muscle are detected by two transducers placed on the mother's abdomen, with one above the fetal heart to monitor heart rate, and the other at the fundus of the uterus to measure frequency of contractions. Doppler ultrasound provides the information, which is recorded on a paper strip known as a cardiotocograph (CTG). External tocometry is useful for showing the beginning and end of contractions as well as their frequency, but not the strength of the contractions. The absolute values of pressure readings on an external tocometer are dependent on position and are not sensitive in people who are obese. In cases where information on the strength or precise timing of contractions is needed, an internal tocometer is more appropriate.
Internal cardiotocography uses an electronic transducer connected directly to the fetus. A wire electrode, sometimes called a spiral or scalp electrode, is attached to the fetal scalp through the cervical opening and is connected to the monitor. Internal monitoring provides a more accurate and consistent transmission of the fetal heart rate, as unlike external monitoring, it is not affected by factors such as movement. Internal monitoring may be used when external monitoring is inadequate, or if closer surveillance is needed. Internal tocometry can only be used if the amniotic sac is ruptured (either spontaneously or artificially) and the cervix is open. To gauge the strength of contractions, a small catheter (called an intrauterine pressure catheter or IUPC) is passed into the uterus past the fetus. Combined with an internal fetal monitor, an IUPC may give a more precise reading of the baby's heart rate and the strength of contractions.
A typical CTG reading is printed on paper and may be stored on a computer for later reference. The plotting speed (paper feed) is set at 3 cm/min in the U.S. and 1 cm/min in Europe. A variety of systems for centralized viewing of CTG have been installed in maternity hospitals in industrialised countries, allowing simultaneous monitoring of multiple tracings in one or more locations. Display of maternal vital signs, ST signals and an electronic partogram are available in the majority of these systems. A few of them have incorporated computer analysis of cardiotocographic signals or combined cardiotocographic and ST data analysis.
== Interpretation ==
In the US, the Eunice Kennedy Shriver National Institute of Child Health and Human Development sponsored a workshop to develop a standardized nomenclature for use in interpreting Intrapartum fetal heart rate and uterine contraction patterns. This nomenclature has been adopted by the Association of Women's Health, Obstetric and Neonatal Nurses (AWHONN), the American College of Obstetricians and Gynecologists (ACOG), and the Society for Maternal-Fetal Medicine.
The Royal College of Obstetricians and Gynaecologists and the Society of Obstetricians and Gynaecologists of Canada have also published consensus statements on standardized nomenclature for fetal heart rate patterns.
Interpretation of a CTG tracing requires both qualitative and quantitative description of several factors. This is commonly summed up in the following acronym, DR C BRAVADO:
DR: Define Risk
C: Contractions (uterine activity)
BRA: Baseline fetal heart rate (FHR)
V: Baseline FHR variability
A: Presence of accelerations
D: Periodic or episodic decelerations
O: Changes or trends of FHR patterns over time
=== Uterine activity ===
There are several factors used in assessing uterine activity.
Frequency: the number of contractions per unit time.
Duration: the amount of time from the start of a contraction to the end of the same contraction.
Resting tone: a measure of how relaxed the uterus is between contractions. With external monitoring, this necessitates the use of palpation to determine relative strength. With an IUPC, this is determined by assessing actual pressures as graphed on the paper.
Interval: the amount of time between the end of one contraction to the beginning of the next contraction.
The NICHD nomenclature defines uterine activity by quantifying the number of contractions present in a 10-minute window, averaged over 30 minutes. Uterine activity may be defined as:
Normal: 5 or fewer contractions in 10 minutes, averaged over a 30-minute window
Uterine tachysystole: more than 5 contractions in 10 minutes, averaged over a 30-minute window
=== Baseline fetal heart rate ===
The NICHD nomenclature defines baseline fetal heart rate as:
"The baseline FHR is determined by approximating the mean FHR rounded to increments of 5 beats per minute (bpm) during a 10-minute window, excluding accelerations and decelerations and periods of marked FHR variability (greater than 25 bpm). There must be at least 2 minutes of identifiable baseline segments (not necessarily contiguous) in any 10-minute window, or the baseline for that period is indeterminate. In such cases, it may be necessary to refer to the previous 10-minute window for determination of the baseline. An abnormal baseline is termed bradycardia when the baseline FHR is less than 110 bpm; it is termed tachycardia when the baseline FHR is greater than 160 bpm."
=== Baseline FHR variability ===
Moderate baseline fetal heart rate variability reflects the delivery of oxygen to the fetal central nervous system. Its presence is reassuring in predicting an absence of metabolic acidemia and hypoxic injury to the fetus at the time it is observed. In contrast, the presence of minimal baseline FHR variability, or an absence of FHR variability, does not reliably predict fetal acidemia or hypoxia; lack of moderate baseline FHR variability may be a result of the fetal sleep cycle, medications, extreme prematurity, congenital anomalies, or pre-existing neurological injury. Furthermore, increased (or marked) baseline FHR variability (see "Zigzag pattern" and "Saltatory pattern" sections below) is associated with adverse fetal and neonatal outcomes. Based on the duration of the change, increased (i.e. marked) baseline variability is divided into two terms: zigzag pattern and saltatory pattern of FHR. The NICHD nomenclature defines baseline FHR variability as:
Baseline FHR variability is determined in a 10-minute window, excluding accelerations and decelerations. Baseline FHR variability is defined as fluctuations in the baseline FHR that are irregular in amplitude and frequency. The fluctuations are visually quantitated as the amplitude of the peak-to-trough in beats per minute. Furthermore, the baseline FHR variability is categorized by the quantitated amplitude as:
Absent – undetectable
Minimal – greater than undetectable, but 5 or fewer beats per minute
Moderate – 6–25 beats per minute
Marked – greater than 25 beats per minute
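The threshold definitions above can be summarised in a short sketch (purely illustrative, not a clinical tool; the function names are ours):

```python
def classify_baseline(fhr_bpm: float) -> str:
    """NICHD baseline categories (mean FHR over a 10-minute window)."""
    if fhr_bpm < 110:
        return "bradycardia"
    if fhr_bpm > 160:
        return "tachycardia"
    return "normal"

def classify_variability(amplitude_bpm: float) -> str:
    """NICHD peak-to-trough baseline variability categories."""
    if amplitude_bpm == 0:
        return "absent"
    if amplitude_bpm <= 5:
        return "minimal"
    if amplitude_bpm <= 25:
        return "moderate"
    return "marked"
```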
=== Zigzag pattern of fetal heart rate ===
A Zigzag pattern of fetal heart rate (FHR) is defined as FHR baseline amplitude changes of more than 25 beats per minute (bpm) with a minimum duration of 2 minutes and maximum of 30 minutes. However, according to another study, even a >1 min duration of the zigzag pattern is associated with an increased risk of adverse neonatal outcomes. Despite the similarities in the shape of the FHR patterns, the zigzag pattern is distinguished from the saltatory pattern by its duration. According to the International Federation of Gynaecology and Obstetrics (FIGO), a saltatory pattern is defined as FHR baseline amplitude changes of more than 25 bpm with durations of >30 minutes. In a recently published large obstetric cohort study of the zigzag pattern in almost 5,000 term deliveries in Helsinki University Central Hospital, Tarvonen et al. (2020) reported: "ZigZag pattern and late decelerations of FHR were associated with cord blood acidemia, low Apgar scores, need for intubation and resuscitation, NICU admission and neonatal hypoglycemia during the first 24 hours after birth." Furthermore, the "ZigZag pattern precedes late decelerations, and the fact that normal FHR pattern precedes the ZigZag pattern in the majority of the cases suggests that the ZigZag pattern is an early sign of fetal hypoxia, which emphasizes its clinical importance."
Furthermore, in the recent study of 5150 deliveries, the hypoxia-related ZigZag pattern was associated with cord blood acidemia, low 5-min Apgar scores at birth, and need for neonatal resuscitation after birth, indicating increased occurrence of fetal hypoxia in GDM pregnancies.
=== Saltatory pattern of fetal heart rate ===
A saltatory pattern of fetal heart rate is defined in cardiotocography (CTG) guidelines by FIGO as fetal heart rate (FHR) baseline amplitude changes of more than 25 beats per minute (bpm) with a duration of >30 minutes.
In a 1992 study, the saltatory pattern of FHR was defined by O'Brien-Abel and Benedetti as "[f]etal heart baseline amplitude changes of greater than 25 bpm with an oscillatory frequency of greater than 6 per minute for a minimum duration of 1 minute". The pathophysiology of the saltatory pattern is not well known. It has been linked with rapidly progressing hypoxia, for example due to an umbilical cord compression, and it is presumed to be caused by an instability of the fetal central nervous system.
In a study by Nunes et al. (2014), four saltatory patterns in CTG exceeding 20 minutes in the last 30 minutes before birth were associated with fetal metabolic acidosis. According to this study, saltatory pattern is a relatively rare condition; only four cases were found from three large databases.
In a study by Tarvonen et al. (2019), it was demonstrated that the occurrence of saltatory pattern (already with the minimum duration of 2 minutes) in CTG tracings during labor was associated with fetal hypoxia indicated by high umbilical vein (UV) blood erythropoietin (EPO) levels and umbilical artery (UA) blood acidosis at birth in human fetuses. As saltatory patterns preceded late decelerations of fetal heart rate (FHR) in the majority of cases, saltatory pattern seems to be an early sign of fetal hypoxia. According to the authors, awareness on this gives obstetricians and midwives time to intensify electronic fetal monitoring and to plan possible interventions before fetal asphyxia occurs.
Due to a standardized terminology and to avoid miscommunication on CTG interpretation, it has been recently proposed in an exhaustive BJOG review of animal and human studies that terms such as saltatory pattern, ZigZag pattern and marked variability should be abandoned, and the common term "increased variability" should be used in clinical CTG guidelines.
=== Accelerations ===
The NICHD nomenclature defines an acceleration as a visually apparent abrupt increase in fetal heart rate. An abrupt increase is defined as an increase from the onset of acceleration to the peak in 30 seconds or less. To be called an acceleration, the peak must be at least 15 bpm, and the acceleration must last at least 15 seconds from the onset to return to baseline.
A prolonged acceleration is greater than 2 minutes but less than 10 minutes in duration, while an acceleration lasting 10 minutes or more is defined as a baseline change.
Before 32 weeks of gestation, accelerations are defined as having a peak of at least 10 bpm and a duration of at least 10 seconds.
=== Periodic or episodic decelerations ===
Periodic refers to decelerations that are associated with contractions; episodic refers to those not associated with contractions. There are four types of decelerations as defined by the NICHD nomenclature, all of which are visually assessed.
Early decelerations: a result of increased vagal tone due to compression of the fetal head during contractions. Monitoring usually shows a symmetrical, gradual decrease and return to baseline of FHR, which is associated with a uterine contraction. A 'gradual' deceleration has a time from onset to nadir of 30 seconds or more. Early decelerations begin and end at approximately the same time as contractions, and the low point of the fetal heart rate occurs at the peak of the contraction.
Late decelerations: a result of placental insufficiency, which can result in fetal distress. Monitoring usually shows symmetrical gradual decrease and return to baseline of the fetal heart rate in association with a uterine contraction. A 'gradual' deceleration has an onset to nadir of 30 seconds or more. In contrast to early deceleration, the low point of fetal heart rate occurs after the peak of the contraction, and returns to baseline after the contraction is complete.
Variable decelerations: generally a result of umbilical cord compression, and contractions may further compress a cord when it is wrapped around the neck or under the shoulder of the fetus. They are defined as abrupt decreases in fetal heart rate, with less than 30 seconds from the beginning of the decrease to the nadir of heart rate. The decrease in FHR is at least 15 beats per minute, lasting at least 15 seconds but less than 2 minutes in duration. When variable decelerations are associated with uterine contractions, their onset, depth, and duration commonly vary with successive uterine contractions.
Prolonged deceleration: a decrease in FHR from baseline of at least 15 bpm, lasting at least 2 minutes but less than 10 minutes. A deceleration of at least 10 minutes is a baseline change.
Additionally, decelerations can be recurrent or intermittent based on their frequency (more or less than 50% of the time) within a 20-minute window.
=== FHR pattern classification ===
Before 2008, fetal heart rate was classified as either "reassuring" or "nonreassuring". The NICHD workgroup proposed terminology for a three-tiered system to replace the older, undefined terms.
Category I (Normal): Tracings with all these findings present are strongly predictive of normal fetal acid-base status at the time of observation and the fetus can be followed in a standard manner:
Baseline rate 110–160 bpm,
Moderate variability,
Absence of late or variable decelerations,
Early decelerations and accelerations may or may not be present.
Category II (Indeterminate): Tracing is not predictive of abnormal fetal acid-base status. Evaluation and continued surveillance and reevaluations are indicated.
Bradycardia with normal baseline variability
Tachycardia
Minimal or Marked baseline variability of FHR
Accelerations: Absence of induced accelerations after fetal stimulation
Periodic or Episodic decelerations: Longer than 2 minutes but shorter than 10 minutes; recurrent late decelerations with moderate baseline variability
Variable decelerations with other characteristics such as slow return to baseline, "overshoots", or "shoulders" (humps on either side of the deceleration)
Category III (Abnormal): Tracing is predictive of abnormal fetal acid-base status at the time of observation; this requires prompt evaluation and management.
Absence of baseline variability, with recurrent late/variable decelerations or bradycardia; or
Sinusoidal fetal heart rate.
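As a rough illustration (not a clinical tool), the three-tier criteria above can be encoded as a rule-based classifier. The function name and the pre-summarized inputs below are our own simplification of the NICHD definitions:

```python
# Rough rule-based sketch of the NICHD three-tier FHR classification.
# Inputs are pre-summarized tracing features; illustrative only, not a
# clinical tool.
def classify_fhr_tracing(baseline_bpm, variability,
                         recurrent_late_or_variable_decels,
                         sinusoidal=False):
    # Category III: sinusoidal pattern, or absent variability together with
    # recurrent late/variable decelerations or bradycardia.
    if sinusoidal:
        return "III"
    if variability == "absent" and (recurrent_late_or_variable_decels
                                    or baseline_bpm < 110):
        return "III"
    # Category I: all normal findings present.
    if (110 <= baseline_bpm <= 160 and variability == "moderate"
            and not recurrent_late_or_variable_decels):
        return "I"
    # Everything else is indeterminate (Category II).
    return "II"

print(classify_fhr_tracing(140, "moderate", False))  # → I
```

Anything not captured by the Category I or Category III rules falls through to Category II, mirroring the "indeterminate" catch-all in the guideline.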
=== Updated 2015 FIGO Intrapartum Fetal Monitoring Guidelines ===
FIGO has recently modified the guidelines on intrapartum fetal monitoring, proposing the following interpretation:
Normal: No hypoxia or acidosis; no intervention necessary to improve fetal oxygenation state.
Baseline 110–160 bpm
Variability 5–25 bpm
No repetitive decelerations (decelerations are defined as repetitive when associated with >50% contractions)
Suspicious: Low probability of hypoxia/acidosis, warrants action to correct reversible causes if identified, close monitoring or adjunctive methods.
Lacking at least one characteristic of normality, but with no pathological features.
Pathological: High probability of hypoxia/acidosis, requires immediate action to correct reversible causes, adjunctive methods, or if this is not possible expedite delivery. In acute situations, delivery should happen immediately.
Baseline <100 bpm
Reduced or increased variability or sinusoidal pattern
Repetitive late or prolonged decelerations for >30 min, or >20 min if reduced variability (decelerations are defined as repetitive when associated with >50% contractions)
Deceleration >5 minutes
== Benefits ==
According to the Cochrane review from February 2017, CTG was associated with fewer neonatal seizures, but it is unclear whether it had any impact on long-term neurodevelopmental outcomes. No clear differences in incidence of cerebral palsy, infant mortality, other standard measures of neonatal wellbeing, or any meaningful differences in long-term outcomes could be shown. Continuous CTG was associated with higher rates of caesarean sections and instrumental vaginal births. The authors see the challenge in how to discuss these results with women so that they can make an informed decision without compromising the normality of labour. Future research should focus on events that happen in pregnancy and labour that could be the cause of long-term problems for the baby.
== See also ==
Fetal stethoscope
Nonstress test (NST)
Biophysical profile (BPP)
== References == | Wikipedia/Cardiotocography |
Passive dynamics refers to the dynamical behavior of actuators, robots, or organisms when not drawing energy from a supply (e.g., batteries, fuel, ATP). Depending on the application, considering or altering the passive dynamics of a powered system can have drastic effects on performance, particularly energy economy, stability, and task bandwidth. Devices using no power source are considered "passive", and their behavior is fully described by their passive dynamics.
In some fields of robotics (legged robotics in particular), design and more relaxed control of passive dynamics has become a complementary (or even alternative) approach to joint-positioning control methods developed through the 20th century. Additionally, the passive dynamics of animals have been of interest to biomechanists and integrative biologists, as these dynamics often underlie biological motions and couple with neuromechanical control.
Particularly relevant fields for investigating and engineering passive dynamics include legged locomotion and manipulation.
== History ==
The term and its principles were developed by Tad McGeer in the late 1980s. While at Simon Fraser University in Burnaby, British Columbia, McGeer showed that a human-like frame can walk itself down a slope without requiring muscles or motors. Unlike traditional robots, which expend energy by using motors to control every motion, McGeer's early passive-dynamic machines relied only on gravity and the natural swinging of their limbs to move forward down a slope.
== Models ==
The original model for passive dynamics is based on human and animal leg motions. Completely actuated systems, such as the legs of the Honda Asimo robot, are not very efficient because each joint has a motor and control assembly. Human-like gaits are far more efficient because movement is sustained by the natural swing of the legs instead of motors placed at each joint.
Tad McGeer's 1990 paper "Passive Walking with Knees" provides an excellent overview of the practical advantages of knees for walking legs. Knees, according to McGeer, solve the problem of feet colliding with the ground when the leg swings forward, and also offer more stability in some settings.
Passive dynamics is a valuable addition to the field of controls because it approaches the control of a system as a combination of mechanical and electrical elements. While control methods have always been based on the mechanical actions (physics) of a system, passive dynamics exploits morphological computation: the ability of a mechanical system itself to accomplish control functions.
== Applying passive dynamics ==
Adding actuation to passive dynamic walkers results in highly efficient robotic walkers. Such walkers can be implemented at lower mass and use less energy because they walk effectively with only a couple of motors. This combination results in a superior "specific cost of transport".
Energy efficiency in level-ground transport is quantified in terms of the dimensionless "specific cost of transport", which is the amount of energy required to carry a unit weight a unit distance. Passive dynamic walkers such as the Cornell Efficient Biped have the same specific cost of transport as humans, 0.20. Not incidentally, passive dynamic walkers have human-like gaits. By comparison, Honda's biped ASIMO, which does not utilize the passive dynamics of its own limbs, has a specific cost of transport of 3.23.
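The definition above translates directly into a one-line formula. A minimal sketch follows; the function name and the sample numbers are illustrative, not measurements from any particular robot:

```python
# Dimensionless specific cost of transport: energy spent per unit weight
# per unit distance travelled. Sample numbers are made up for illustration.
def specific_cost_of_transport(energy_joules, mass_kg, distance_m, g=9.81):
    return energy_joules / (mass_kg * g * distance_m)

# e.g. a hypothetical 10 kg walker spending 196.2 J over 10 m:
print(specific_cost_of_transport(196.2, 10.0, 10.0))  # ≈ 0.2, human-like
```

A value near 0.2 would match the human and Cornell Efficient Biped figures quoted above, while a fully actuated humanoid like ASIMO sits an order of magnitude higher.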
The current distance record for walking robots, 65.17 km, is held by the passive dynamics based Cornell Ranger.
Passive dynamics have recently found a role in the design and control of prosthetics. Since passive dynamics provides the mathematical models of efficient motion, it is an appropriate avenue to develop efficient limbs that require less energy for amputees. Andrew Hansen, Steven Gard and others have done extensive research in developing better foot prosthetics by utilizing passive dynamics.
Passive walking biped robots exhibit different kinds of chaotic behaviors, e.g., bifurcation, intermittency, and crisis.
== See also ==
Underactuation
== References ==
== Bibliography ==
Tad McGeer (April 1990). "Passive dynamic walking". International Journal of Robotics Research.
V. A. Tucker (1975). "The energetic cost of moving about". American Scientist. 63 (4): 413–419. Bibcode:1975AmSci..63..413T. PMID 1137237.
Steve H Collins; Martijn Wisse; Andy Ruina (2001). "A 3-D Passive Dynamic Walking Robot with Two Legs and Knees". International Journal of Robotics Research. 20 (7): 607–615. doi:10.1177/02783640122067561. S2CID 12350943.
Steve H Collins; Martijn Wisse; Andy Ruina; Russ Tedrake (2005). "Efficient bipedal robots based on passive-dynamic Walkers". Science. 307 (5712): 1082–1085. Bibcode:2005Sci...307.1082C. doi:10.1126/science.1107799. PMID 15718465. S2CID 1315227. and Steve H Collins; Andy Ruina (2005). "A bipedal walking robot with efficient and human-like gait". Proc. IEEE International Conference on Robotics and Automation.
Chandana Paul (2004). "Morphology and Computation". Proceedings of the International Conference on the Simulation of Adaptive Behaviour: 33–38.
== External links ==
Cornell Biorobotics and Locomotion Lab — videos and papers on passive dynamic walkers, including McGeer's originals, the Cornell Efficient Walker, and the Cornell Ranger
Droid Logic — simulations of passive dynamic walkers and runners created using evolutionary robotics
MIT Leg Lab — walking and running robots that utilize natural dynamics
Steve Collins' Robots page — the Cornell Efficient Walker, its passive predecessor, and additional references | Wikipedia/Passive_dynamics |
In lab experiments that study chaos theory, approaches designed to control chaos are based on certain observed system behaviors. Any chaotic attractor contains an infinite number of unstable, periodic orbits. Chaotic dynamics, then, consists of a motion where the system state moves in the neighborhood of one of these orbits for a while, then falls close to a different unstable, periodic orbit where it remains for a limited time and so forth. This results in a complicated and unpredictable wandering over longer periods of time.
Control of chaos is the stabilization, by means of small system perturbations, of one of these unstable periodic orbits. The result is to render an otherwise chaotic motion more stable and predictable, which is often an advantage. The perturbation must be tiny compared to the overall size of the attractor of the system to avoid significant modification of the system's natural dynamics.
Several techniques have been devised for chaos control, but most are developments of two basic approaches: the Ott–Grebogi–Yorke (OGY) method and Pyragas continuous control. Both methods require prior determination of the unstable periodic orbits of the chaotic system before the controlling algorithm can be designed.
== OGY method ==
Edward Ott, Celso Grebogi and James A. Yorke were the first to make the key observation that the infinite number of unstable periodic orbits typically embedded in a chaotic attractor could be taken advantage of for the purpose of achieving control by means of applying only very small perturbations. After making this general point, they illustrated it with a specific method, since called the Ott–Grebogi–Yorke (OGY) method of achieving stabilization of a chosen unstable periodic orbit. In the OGY method, small, wisely chosen, kicks are applied to the system once per cycle, to maintain it near the desired unstable periodic orbit.
To start, one obtains information about the chaotic system by analyzing a slice of the chaotic attractor. This slice is a Poincaré section. After the information about the section has been gathered, one allows the system to run and waits until it comes near a desired periodic orbit in the section. Next, the system is encouraged to remain on that orbit by perturbing the appropriate parameter. When the control parameter is actually changed, the chaotic attractor is shifted and distorted somewhat. If all goes according to plan, the new attractor encourages the system to continue on the desired trajectory. One strength of this method is that it does not require a detailed model of the chaotic system but only some information about the Poincaré section. It is for this reason that the method has been so successful in controlling a wide variety of chaotic systems.
The weaknesses of this method are in isolating the Poincaré section and in calculating the precise perturbations necessary to attain stability.
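A minimal sketch of the OGY idea can be given for a one-dimensional toy system (our choice; the parameter values are illustrative, not from any cited experiment). The orbit of the logistic map runs freely until it wanders into a small window around the unstable fixed point, where a tiny parameter kick places the linearized next iterate back on the fixed point:

```python
# Illustrative OGY-style stabilization of the unstable fixed point of the
# logistic map x_{n+1} = r*x_n*(1 - x_n), at the chaotic parameter r0 = 3.9.
r0 = 3.9
xstar = 1.0 - 1.0 / r0        # unstable fixed point: x* = r0*x*(1 - x*)
dfdx = 2.0 - r0               # f'(x*) = r0*(1 - 2*x*)
dfdr = xstar * (1.0 - xstar)  # df/dr evaluated at x*
window = 0.01                 # control activates only inside this window

x = 0.3
for _ in range(20000):
    dr = 0.0
    if abs(x - xstar) < window:
        # Tiny kick chosen so the linearized next iterate lands on x*:
        #   x* + f'(x*)*(x - x*) + (df/dr)*dr = x*
        dr = -dfdx * (x - xstar) / dfdr
    x = (r0 + dr) * x * (1.0 - x)

print(abs(x - xstar))  # tiny: the orbit has been captured by the fixed point
```

Note that the perturbation |dr| never exceeds a few percent of r0, in keeping with the requirement that control kicks stay small relative to the system's natural dynamics.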
== Pyragas method ==
In the Pyragas method of stabilizing a periodic orbit, an appropriate continuous controlling signal is injected into the system, whose intensity is practically zero as the system evolves close to the desired periodic orbit but increases when it drifts away from the desired orbit. Both the Pyragas and OGY methods are part of a general class of methods called "closed loop" or "feedback" methods which can be applied based on knowledge of the system obtained through solely observing the behavior of the system as a whole over a suitable period of time. The method was proposed by Lithuanian physicist Kęstutis Pyragas.
== Applications ==
Experimental control of chaos by one or both of these methods has been achieved in a variety of systems, including turbulent fluids, oscillating chemical reactions, magneto-mechanical oscillators, and cardiac tissues. Researchers have also attempted the control of chaotic bubbling with the OGY method, using electrostatic potential as the primary control variable.
Forcing two systems into the same state is not the only way to achieve synchronization of chaos. Both control of chaos and synchronization constitute parts of cybernetical physics, a research area on the border between physics and control theory.
== References ==
== External links ==
Chaos control bibliography (1997–2000) | Wikipedia/Control_of_chaos |
In mathematics, iterated function systems (IFSs) are a method of constructing fractals; the resulting fractals are often self-similar. IFS fractals are more related to set theory than fractal geometry. They were introduced in 1981.
IFS fractals, as they are normally called, can be of any number of dimensions, but are commonly computed and drawn in 2D. The fractal is made up of the union of several copies of itself, each copy being transformed by a function (hence "function system"). The canonical example is the Sierpiński triangle. The functions are normally contractive, which means they bring points closer together and make shapes smaller. Hence, the shape of an IFS fractal is made up of several possibly-overlapping smaller copies of itself, each of which is also made up of copies of itself, ad infinitum. This is the source of its self-similar fractal nature.
== Definition ==
Formally, an iterated function system is a finite set of contraction mappings on a complete metric space. Symbolically,
{\displaystyle \{f_{i}:X\to X\mid i=1,2,\dots ,N\},\ N\in \mathbb {N} }
is an iterated function system if each f_i is a contraction on the complete metric space X.
== Properties ==
Hutchinson showed that, for the metric space ℝ^n, or more generally, for a complete metric space X, such a system of functions has a unique nonempty compact (closed and bounded) fixed set S. One way of constructing a fixed set is to start with an initial nonempty closed and bounded set S0 and iterate the actions of the fi, taking Sn+1 to be the union of the images of Sn under the fi; then taking S to be the closure of the limit {\displaystyle \lim _{n\rightarrow \infty }S_{n}}. Symbolically, the unique fixed (nonempty compact) set S ⊆ X has the property
{\displaystyle S={\overline {\bigcup _{i=1}^{N}f_{i}(S)}}.}
The set S is thus the fixed set of the Hutchinson operator {\displaystyle F:2^{X}\to 2^{X}} defined for A ⊆ X via
{\displaystyle F(A)={\overline {\bigcup _{i=1}^{N}f_{i}(A)}}.}
The existence and uniqueness of S is a consequence of the contraction mapping principle, as is the fact that
{\displaystyle \lim _{n\to \infty }F^{n}(A)=S}
for any nonempty compact set A in X. (For contractive IFS this convergence takes place even for any nonempty closed bounded set A.) Random elements arbitrarily close to S may be obtained by the "chaos game," described below.
Recently it was shown that the IFSs of non-contractive type (i.e. composed of maps that are not contractions with respect to any topologically equivalent metric in X) can yield attractors.
These arise naturally in projective spaces, though classical irrational rotation on the circle can be adapted too.
The collection of functions f_i generates a monoid under composition. If there are only two such functions, the monoid can be visualized as a binary tree, where, at each node of the tree, one may compose with the one or the other function (i.e. take the left or the right branch). In general, if there are k functions, then one may visualize the monoid as a full k-ary tree, also known as a Cayley tree.
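A small sketch of the Hutchinson operator in action, using the classic two-map Cantor-set IFS as the example system (exact rational arithmetic avoids floating-point drift; the variable names are ours):

```python
from fractions import Fraction

# Hutchinson operator F(A) = f1(A) ∪ f2(A) for the Cantor-set IFS
# f1(x) = x/3, f2(x) = x/3 + 2/3, iterated on a finite starting set.
maps = [lambda x: x / 3, lambda x: x / 3 + Fraction(2, 3)]

A = {Fraction(0), Fraction(1)}           # endpoints of the unit interval
for _ in range(5):
    A = {f(x) for f in maps for x in A}  # one application of F

# The 64 points lie on the 5th stage of the Cantor construction.
print(len(A), min(A), max(A))  # → 64 0 1
```

Because the two image intervals [0, 1/3] and [2/3, 1] are disjoint, the point count doubles with each application of F, converging toward the Cantor set as the fixed set.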
== Constructions ==
Sometimes each function f_i is required to be a linear, or more generally an affine, transformation, and hence represented by a matrix. However, IFSs may also be built from non-linear functions, including projective transformations and Möbius transformations. The Fractal flame is an example of an IFS with nonlinear functions.
The most common algorithm to compute IFS fractals is called the "chaos game". It consists of picking a random point in the plane, then iteratively applying one of the functions chosen at random from the function system to transform the point to get a next point. An alternative algorithm is to generate each possible sequence of functions up to a given maximum length, and then to plot the results of applying each of these sequences of functions to an initial point or shape.
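The chaos game described above can be sketched in a few lines. The Sierpiński triangle vertices below are one conventional choice, and the transient cutoff is an arbitrary illustrative value:

```python
import random

# Chaos game for the Sierpinski triangle: each of the three maps halves
# the distance from the current point to one vertex of the triangle.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(n_points, seed=0, transient=20):
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()      # arbitrary starting point
    points = []
    for i in range(n_points):
        vx, vy = rng.choice(VERTICES)      # pick one map uniformly at random
        x, y = (x + vx) / 2, (y + vy) / 2  # apply the chosen contraction
        if i >= transient:                 # discard early iterates far from S
            points.append((x, y))
    return points

pts = chaos_game(5000)   # points scattered across the whole attractor
```

Since each map halves distances, the iterate approaches the attractor exponentially fast, which is why a transient of a couple dozen steps suffices before recording points.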
Each of these algorithms provides a global construction which generates points distributed across the whole fractal. If a small area of the fractal is being drawn, many of these points will fall outside of the screen boundaries. This makes zooming into an IFS construction drawn in this manner impractical.
Although the theory of IFS requires each function to be contractive, in practice software that implements IFS only requires that the whole system be contractive on average.
== Partitioned iterated function systems ==
PIFS (partitioned iterated function systems), also called local iterated function systems, give surprisingly good image compression, even for photographs that don't seem to have the kinds of self-similar structure shown by simple IFS fractals.
== The inverse problem ==
Very fast algorithms exist to generate an image from a set of IFS or PIFS parameters. It is faster and requires much less storage space to store a description of how it was created, transmit that description to a destination device, and regenerate that image anew on the destination device, than to store and transmit the color of each pixel in the image.
The inverse problem is more difficult: given some original arbitrary digital image such as a digital photograph, try to find a set of IFS parameters which, when evaluated by iteration, produces another image visually similar to the original.
In 1989, Arnaud Jacquin presented a solution to a restricted form of the inverse problem using only PIFS; the general form of the inverse problem remains unsolved.
As of 1995, all fractal compression software is based on Jacquin's approach.
== Examples ==
The diagram shows the construction of an IFS from two affine functions. The functions are represented by their effect on the bi-unit square (the function transforms the outlined square into the shaded square). The combination of the two functions forms the Hutchinson operator. Three iterations of the operator are shown, and then the final image is of the fixed point, the final fractal.
Early examples of fractals which may be generated by an IFS include the Cantor set, first described in 1884; and de Rham curves, a type of self-similar curve described by Georges de Rham in 1957.
== History ==
IFSs were conceived in their present form by John E. Hutchinson in 1981 and popularized by Michael Barnsley's book Fractals Everywhere.
IFSs provide models for certain plants, leaves, and ferns, by virtue of the self-similarity which often occurs in branching structures in nature.
== See also ==
Complex-base system
Collage theorem
Infinite compositions of analytic functions
L-system
Fractal compression
== Notes ==
== References ==
Draves, Scott; Erik Reckase (July 2007). "The Fractal Flame Algorithm" (PDF). Archived from the original (PDF) on 2008-05-09. Retrieved 2008-07-17.
Falconer, Kenneth (1990). Fractal geometry: Mathematical foundations and applications. John Wiley and Sons. pp. 113–117, 136. ISBN 0-471-92287-0.
Barnsley, Michael; Andrew Vince (2011). "The Chaos Game on a General Iterated Function System". Ergodic Theory Dynam. Systems. 31 (4): 1073–1079. arXiv:1005.0322. Bibcode:2010arXiv1005.0322B. doi:10.1017/S0143385710000428. S2CID 122674315.
For an historical overview, and the generalization : David, Claire (2019). "fractal properties of Weierstrass-type functions". Proceedings of the International Geometry Center. 12 (2): 43–61. doi:10.15673/tmgc.v12i2.1485. S2CID 209964068.
== External links ==
A Primer on the Elementary Theory of Infinite Compositions of Complex Functions | Wikipedia/Iterated_function_system |
The dyadic transformation (also known as the dyadic map, bit shift map, 2x mod 1 map, Bernoulli map, doubling map or sawtooth map) is the mapping (i.e., recurrence relation)
{\displaystyle T:[0,1)\to [0,1)^{\infty }}
{\displaystyle x\mapsto (x_{0},x_{1},x_{2},\ldots )}
(where [0,1)^∞ is the set of sequences from [0,1)) produced by the rule
{\displaystyle x_{0}=x}
{\displaystyle {\text{for all }}n\geq 0,\ x_{n+1}=(2x_{n}){\bmod {1}}.}
Equivalently, the dyadic transformation can also be defined as the iterated function map of the piecewise linear function
{\displaystyle T(x)={\begin{cases}2x&0\leq x<{\frac {1}{2}}\\2x-1&{\frac {1}{2}}\leq x<1.\end{cases}}}
The name bit shift map arises because, if the value of an iterate is written in binary notation, the next iterate is obtained by shifting the binary point one bit to the right, and if the bit to the left of the new binary point is a "one", replacing it with a zero.
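The bit-shift reading can be checked directly; the helper functions below are our own illustrative names, and exact rational arithmetic keeps the binary digits exact:

```python
from fractions import Fraction

def dyadic(x):
    # One step of the doubling map: 2x mod 1.
    return (2 * x) % 1

def bits(x, n):
    """First n binary digits of x in [0, 1)."""
    out = []
    for _ in range(n):
        x *= 2
        out.append(int(x))  # the digit that crossed the binary point
        x -= int(x)
    return out

x = Fraction(11, 24)
print(bits(x, 6))           # → [0, 1, 1, 1, 0, 1]
print(bits(dyadic(x), 6))   # → [1, 1, 1, 0, 1, 0]  (one left shift)
```

Applying the map once drops the leading binary digit and shifts the rest left, exactly as described above.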
The dyadic transformation provides an example of how a simple 1-dimensional map can give rise to chaos. This map readily generalizes to several others. An important one is the beta transformation, defined as
{\displaystyle T_{\beta }(x)=\beta x{\bmod {1}}.}
This map has been extensively studied by many authors. It was introduced by Alfréd Rényi in 1957, and an invariant measure for it was given by Alexander Gelfond in 1959 and again independently by Bill Parry in 1960.
== Relation to the Bernoulli process ==
The map can be obtained as a homomorphism on the Bernoulli process. Let
Ω = {H, T}^ℕ be the set of all semi-infinite strings of the letters H and T. These can be understood to be the flips of a coin, coming up heads or tails. Equivalently, one can write Ω = {0, 1}^ℕ, the space of all (semi-)infinite strings of binary bits. The word "infinite" is qualified with "semi-", as one can also define a different space {0, 1}^ℤ consisting of all doubly-infinite (double-ended) strings; this will lead to the Baker's map. The qualification "semi-" is dropped below.
This space has a natural shift operation, given by
{\displaystyle T(b_{0},b_{1},b_{2},\dots )=(b_{1},b_{2},\dots )}
where (b_0, b_1, …) is an infinite string of binary digits. Given such a string, write
{\displaystyle x=\sum _{n=0}^{\infty }{\frac {b_{n}}{2^{n+1}}}.}
The resulting x is a real number in the unit interval 0 ≤ x ≤ 1.
The shift T induces a homomorphism, also called T, on the unit interval. Since
{\displaystyle T(b_{0},b_{1},b_{2},\dots )=(b_{1},b_{2},\dots ),}
one can easily see that
{\displaystyle T(x)=2x{\bmod {1}}.}
For the doubly-infinite sequence of bits Ω = 2^ℤ, the induced homomorphism is the Baker's map.
The dyadic sequence is then just the sequence
{\displaystyle (x,T(x),T^{2}(x),T^{3}(x),\dots )}
That is, x_n = T^n(x).
=== The Cantor set ===
Note that the sum
{\displaystyle y=\sum _{n=0}^{\infty }{\frac {b_{n}}{3^{n+1}}}}
gives the Cantor function, as conventionally defined. This is one reason why the set {H, T}^ℕ is sometimes called the Cantor set.
== Rate of information loss and sensitive dependence on initial conditions ==
One hallmark of chaotic dynamics is the loss of information as simulation occurs. If we start with information on the first s bits of the initial iterate, then after m simulated iterations (m < s) we only have s − m bits of information remaining. Thus we lose information at the exponential rate of one bit per iteration. After s iterations, our simulation has reached the fixed point zero, regardless of the true iterate values; thus we have suffered a complete loss of information. This illustrates sensitive dependence on initial conditions—the mapping from the truncated initial condition has deviated exponentially from the mapping from the true initial condition. And since our simulation has reached a fixed point, for almost all initial conditions it will not describe the dynamics in the qualitatively correct way as chaotic.
Equivalent to the concept of information loss is the concept of information gain. In practice some real-world process may generate a sequence of values (xn) over time, but we may only be able to observe these values in truncated form. Suppose for example that x0 = 0.1001101, but we only observe the truncated value 0.1001. Our prediction for x1 is 0.001. If we wait until the real-world process has generated the true x1 value 0.001101, we will be able to observe the truncated value 0.0011, which is more accurate than our predicted value 0.001. So we have received an information gain of one bit.
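The loss of one bit per iteration can be made concrete with exact arithmetic, using the 7-bit initial condition x0 = 0.1001101 from the example above:

```python
from fractions import Fraction

def dyadic(x):
    # One step of the doubling map: 2x mod 1.
    return (2 * x) % 1

# x0 = 0.1001101 in binary: an initial condition known to s = 7 bits.
x = Fraction(0b1001101, 2**7)
orbit = [x]
for _ in range(7):
    orbit.append(dyadic(orbit[-1]))

# Each iteration discards the leading bit; after 7 iterations nothing is left.
print(orbit[-1])  # → 0
```

After s = 7 iterations the simulated orbit sits at the fixed point 0 regardless of the discarded digits, illustrating the complete loss of information described above.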
== Relation to tent map and logistic map ==
The dyadic transformation is topologically semi-conjugate to the unit-height tent map. Recall that the unit-height tent map is given by
{\displaystyle x_{n+1}=f_{1}(x_{n})={\begin{cases}x_{n}&\mathrm {for} ~~x_{n}\leq 1/2\\1-x_{n}&\mathrm {for} ~~x_{n}\geq 1/2\end{cases}}}
The conjugacy is explicitly given by
{\displaystyle S(x)=\sin \pi x}
so that
{\displaystyle f_{1}=S^{-1}\circ T\circ S}
That is,
{\displaystyle f_{1}(x)=S^{-1}(T(S(x))).}
This is stable under iteration, as
{\displaystyle f_{1}^{n}=f_{1}\circ \cdots \circ f_{1}=S^{-1}\circ T\circ S\circ S^{-1}\circ \cdots \circ T\circ S=S^{-1}\circ T^{n}\circ S}
It is also conjugate to the chaotic r = 4 case of the logistic map.
The r = 4 case of the logistic map is
{\displaystyle z_{n+1}=4z_{n}(1-z_{n})}
; this is related to the bit shift map in variable x by
{\displaystyle z_{n}=\sin ^{2}(2\pi x_{n}).}
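The stated change of variable z_n = sin²(2πx_n) can be verified numerically; the starting point below is arbitrary:

```python
import math

def dyadic(x):
    # One step of the doubling map: 2x mod 1.
    return (2 * x) % 1

def logistic(z):
    # The r = 4 logistic map.
    return 4 * z * (1 - z)

# Check that z = sin^2(2*pi*x) carries dyadic orbits onto logistic orbits:
# logistic(z_n) should equal sin^2(2*pi*x_{n+1}) along the orbit.
x = 0.123
for _ in range(10):
    z = math.sin(2 * math.pi * x) ** 2
    x = dyadic(x)
    z_next = math.sin(2 * math.pi * x) ** 2
    assert abs(logistic(z) - z_next) < 1e-9
```

The check succeeds because sin²(4πx) = 4 sin²(2πx) cos²(2πx) = 4z(1 − z), which is the double-angle identity underlying the conjugacy.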
There is also a semi-conjugacy between the dyadic transformation (here named angle doubling map) and the quadratic polynomial. Here, the map doubles angles measured in turns. That is, the map is given by
{\displaystyle \theta \mapsto 2\theta {\bmod {2\pi }}.}
== Periodicity and non-periodicity ==
Because of the simple nature of the dynamics when the iterates are viewed in binary notation, it is easy to categorize the dynamics based on the initial condition:
If the initial condition is irrational (as almost all points in the unit interval are), then the dynamics are non-periodic—this follows directly from the definition of an irrational number as one with a non-repeating binary expansion. This is the chaotic case.
If x0 is rational the image of x0 contains a finite number of distinct values within [0, 1) and the forward orbit of x0 is eventually periodic, with period equal to the period of the binary expansion of x0. Specifically, if the initial condition is a rational number with a finite binary expansion of k bits, then after k iterations the iterates reach the fixed point 0;
if the initial condition is a rational number with a k-bit transient (k ≥ 0) followed by a q-bit sequence (q > 1) that repeats itself infinitely, then after k iterations the iterates reach a cycle of length q. Thus cycles of all lengths are possible.
For example, the forward orbit of 11/24 is:
{\displaystyle {\frac {11}{24}}\mapsto {\frac {11}{12}}\mapsto {\frac {5}{6}}\mapsto {\frac {2}{3}}\mapsto {\frac {1}{3}}\mapsto {\frac {2}{3}}\mapsto {\frac {1}{3}}\mapsto \cdots ,}
which has reached a cycle of period 2. Within any subinterval of [0, 1), no matter how small, there are therefore an infinite number of points whose orbits are eventually periodic, and an infinite number of points whose orbits are never periodic. This sensitive dependence on initial conditions is a characteristic of chaotic maps.
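The orbit of 11/24 above can be reproduced with exact rational arithmetic:

```python
from fractions import Fraction

def dyadic(x):
    # One step of the doubling map: 2x mod 1.
    return (2 * x) % 1

# Exact forward orbit of 11/24: a transient, then a period-2 cycle.
x = Fraction(11, 24)
orbit = [x]
for _ in range(6):
    orbit.append(dyadic(orbit[-1]))

print([str(f) for f in orbit])
# → ['11/24', '11/12', '5/6', '2/3', '1/3', '2/3', '1/3']
```

Using `Fraction` rather than floats matters here: floating-point doubling would drift off the exact cycle after a few dozen iterations.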
=== Periodicity via bit shifts ===
The periodic and non-periodic orbits can be more easily understood not by working with the map T(x) = 2x mod 1 directly, but rather with the bit shift map
{\displaystyle T(b_{0},b_{1},b_{2},\dots )=(b_{1},b_{2},\dots )}
defined on the Cantor space Ω = {0, 1}^ℕ.
That is, the homomorphism
{\displaystyle x=\sum _{n=0}^{\infty }{\frac {b_{n}}{2^{n+1}}}}
is basically a statement that the Cantor set can be mapped into the reals. It is a surjection: every dyadic rational has not one, but two distinct representations in the Cantor set. For example,
{\displaystyle 0.1000000\dots =0.011111\dots }
This is just the binary-string version of the famous 0.999... = 1 problem. The doubled representations hold in general: for any given finite-length initial sequence b_0, b_1, b_2, …, b_{k−1} of length k, one has
{\displaystyle b_{0},b_{1},b_{2},\dots ,b_{k-1},1,0,0,0,\dots =b_{0},b_{1},b_{2},\dots ,b_{k-1},0,1,1,1,\dots }
The initial sequence b_0, b_1, b_2, …, b_{k−1} corresponds to the non-periodic part of the orbit, after which iteration settles down to all zeros (equivalently, all-ones).
Expressed as bit strings, the periodic orbits of the map can be seen to be the rationals. That is, after an initial "chaotic" sequence of b_0, b_1, b_2, …, b_{k−1}, a periodic orbit settles down into a repeating string b_k, b_{k+1}, b_{k+2}, …, b_{k+m−1} of length m. It is not hard to see that such repeating sequences correspond to rational numbers. Writing
{\displaystyle y=\sum _{j=0}^{m-1}b_{k+j}2^{-j-1}}
one then clearly has
{\displaystyle \sum _{j=0}^{\infty }b_{k+j}2^{-j-1}=y\sum _{j=0}^{\infty }2^{-jm}={\frac {y}{1-2^{-m}}}}
Tacking on the initial non-repeating sequence, one clearly has a rational number. In fact, every rational number can be expressed in this way: an initial "random" sequence, followed by a cycling repeat. That is, the periodic orbits of the map are in one-to-one correspondence with the rationals.
This phenomenon is noteworthy, because something similar happens in many chaotic systems. For example, geodesics on compact manifolds can have periodic orbits that behave in this way.
Keep in mind, however, that the rationals are a set of measure zero in the reals. Almost all orbits are not periodic! The aperiodic orbits correspond to the irrational numbers. This property also holds true in a more general setting. An open question is to what degree the behavior of the periodic orbits constrain the behavior of the system as a whole. Phenomena such as Arnold diffusion suggest that the general answer is "not very much".
== Density formulation ==
Instead of looking at the orbits of individual points under the action of the map, it is equally worthwhile to explore how the map affects densities on the unit interval. That is, imagine sprinkling some dust on the unit interval; it is denser in some places than in others. What happens to this density as one iterates?
Write {\displaystyle \rho :[0,1]\to \mathbb {R} } for this density, so that {\displaystyle x\mapsto \rho (x)}. To obtain the action of {\displaystyle T} on this density, one needs to find all points {\displaystyle y=T^{-1}(x)} and write
{\displaystyle \rho (x)\mapsto \sum _{y=T^{-1}(x)}{\frac {\rho (y)}{|T^{\prime }(y)|}}}
The denominator in the above is the Jacobian determinant of the transformation; here it is just the derivative of {\displaystyle T}, and so {\displaystyle T^{\prime }(y)=2}. Also, there are only two points in the preimage {\displaystyle T^{-1}(x)}: these are {\displaystyle y=x/2} and {\displaystyle y=(x+1)/2.} Putting it all together, one gets
{\displaystyle \rho (x)\mapsto {\frac {1}{2}}\rho \!\left({\frac {x}{2}}\right)+{\frac {1}{2}}\rho \!\left({\frac {x+1}{2}}\right)}
By convention, such maps are denoted by {\displaystyle {\mathcal {L}}}, so that in this case one writes
{\displaystyle \left[{\mathcal {L}}_{T}\rho \right](x)={\frac {1}{2}}\rho \!\left({\frac {x}{2}}\right)+{\frac {1}{2}}\rho \!\left({\frac {x+1}{2}}\right)}
The map {\displaystyle {\mathcal {L}}_{T}} is a linear operator, as one easily sees that {\displaystyle {\mathcal {L}}_{T}(f+g)={\mathcal {L}}_{T}(f)+{\mathcal {L}}_{T}(g)} and {\displaystyle {\mathcal {L}}_{T}(af)=a{\mathcal {L}}_{T}(f)} for all functions {\displaystyle f,g} on the unit interval and all constants {\displaystyle a}.
Viewed as a linear operator, the most obvious and pressing question is: what is its spectrum? One eigenvalue is obvious: if {\displaystyle \rho (x)=1} for all {\displaystyle x}, then one has {\displaystyle {\mathcal {L}}_{T}\rho =\rho }, so the uniform density is invariant under the transformation. This is in fact the largest eigenvalue of the operator {\displaystyle {\mathcal {L}}_{T}}: the Frobenius–Perron eigenvalue. The uniform density is nothing other than the invariant measure of the dyadic transformation.
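This invariance can be watched numerically. The sketch below (the grid size and the starting density are arbitrary choices) samples a density on a grid and applies the operator via linear interpolation; the iterates flatten toward the uniform density:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)

def transfer(rho_vals):
    """One application of [L_T rho](x) = (1/2) rho(x/2) + (1/2) rho((x+1)/2),
    acting on a density sampled on the grid xs (linear interpolation)."""
    rho = lambda x: np.interp(x, xs, rho_vals)
    return 0.5 * rho(xs / 2) + 0.5 * rho((xs + 1) / 2)

rho = 1.0 + np.sin(2 * np.pi * xs)   # a non-uniform starting density
for _ in range(10):
    rho = transfer(rho)

# The residual is tiny: the uniform density is the fixed point of L_T.
print(float(np.max(np.abs(rho - 1.0))))
```

For this particular start the sine component is annihilated after a single application (its two half-scale copies cancel), so the residual is pure interpolation error.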
To explore the spectrum of
L
T
{\displaystyle {\mathcal {L}}_{T}}
in greater detail, one must first limit oneself to a suitable space of functions (on the unit interval) to work with. This might be the space of Lebesgue measurable functions, or perhaps the space of square integrable functions, or perhaps even just polynomials. Working with any of these spaces is surprisingly difficult, although a spectrum can be obtained.
=== Borel space ===
A vast amount of simplification results if one instead works with the Cantor space {\displaystyle \Omega =\{0,1\}^{\mathbb {N} }} and functions {\displaystyle \rho :\Omega \to \mathbb {R} .} Some caution is advised, as the map {\displaystyle T(x)=2x{\bmod {1}}} is defined on the unit interval of the real number line, assuming the natural topology on the reals. By contrast, the map {\displaystyle T(b_{0},b_{1},b_{2},\dots )=(b_{1},b_{2},\dots )} is defined on the Cantor space {\displaystyle \Omega =\{0,1\}^{\mathbb {N} }}, which by convention is given a very different topology, the product topology. There is a potential clash of topologies, so some care must be taken. However, as presented above, there is a homomorphism from the Cantor set into the reals; fortunately, it maps open sets into open sets, and thus preserves notions of continuity.
To work with the Cantor set {\displaystyle \Omega =\{0,1\}^{\mathbb {N} }}, one must provide a topology for it; by convention, this is the product topology. By adjoining set-complements, it can be extended to a Borel space, that is, a sigma algebra. The topology is that of cylinder sets. A cylinder set has the generic form
{\displaystyle (*,*,*,\dots ,*,b_{k},b_{k+1},*,\dots ,*,b_{m},*,\dots )}
where the {\displaystyle *} are arbitrary bit values (not necessarily all the same), and the {\displaystyle b_{k},b_{m},\dots } are a finite number of specific bit-values scattered in the infinite bit-string. These are the open sets of the topology. The canonical measure on this space is the Bernoulli measure for the fair coin-toss. If there is just one bit specified in the string of arbitrary positions, the measure is 1/2. If there are two bits specified, the measure is 1/4, and so on. One can get fancier: given a real number {\displaystyle 0<p<1}, one can define a measure
{\displaystyle \mu _{p}(*,\dots ,*,b_{k},*,\dots )=p^{n}(1-p)^{m}}
if there are {\displaystyle n} heads and {\displaystyle m} tails in the sequence. The measure with {\displaystyle p=1/2} is preferred, since it is preserved by the map
{\displaystyle (b_{0},b_{1},b_{2},\dots )\mapsto x=\sum _{n=0}^{\infty }{\frac {b_{n}}{2^{n+1}}}.}
So, for example, {\displaystyle (0,*,\cdots )} maps to the interval {\displaystyle [0,1/2]} and {\displaystyle (1,*,\dots )} maps to the interval {\displaystyle [1/2,1]}, and both of these intervals have a measure of 1/2. Similarly, {\displaystyle (*,0,*,\dots )} maps to the interval {\displaystyle [0,1/4]\cup [1/2,3/4]}, which still has measure 1/2. That is, the embedding above preserves the measure.
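A quick Monte Carlo sanity check of this (sample size and seed are arbitrary): under the Lebesgue measure on [0,1], the event "the second binary digit is 0", i.e. the image of the cylinder set (*,0,*,…), should occur with probability 1/2.

```python
import random

random.seed(0)
N = 100_000
# The second binary digit of x is 0 exactly when floor(4x) is even,
# i.e. when x lies in [0,1/4) ∪ [1/2,3/4).
hits = sum(1 for _ in range(N) if int(random.random() * 4) % 2 == 0)
print(hits / N)  # close to 0.5, the Bernoulli measure of the cylinder
```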
An alternative is to write
{\displaystyle (b_{0},b_{1},b_{2},\dots )\mapsto x=\sum _{n=0}^{\infty }\left[b_{n}p^{n+1}+(1-b_{n})(1-p)^{n+1}\right]}
which preserves the measure {\displaystyle \mu _{p}.} That is, it maps such that the measure on the unit interval is again the Lebesgue measure.
=== Frobenius–Perron operator ===
Denote the collection of all open sets on the Cantor set by {\displaystyle {\mathcal {B}}} and consider the set {\displaystyle {\mathcal {F}}} of all arbitrary functions {\displaystyle f:{\mathcal {B}}\to \mathbb {R} .} The shift {\displaystyle T} induces a pushforward {\displaystyle f\circ T^{-1}} defined by {\displaystyle \left(f\circ T^{-1}\right)\!(x)=f(T^{-1}(x)).} This is again some function {\displaystyle {\mathcal {B}}\to \mathbb {R} .} In this way, the map {\displaystyle T} induces another map {\displaystyle {\mathcal {L}}_{T}} on the space of all functions {\displaystyle {\mathcal {B}}\to \mathbb {R} .} That is, given some {\displaystyle f:{\mathcal {B}}\to \mathbb {R} }, one defines
{\displaystyle {\mathcal {L}}_{T}f=f\circ T^{-1}}
This linear operator is called the transfer operator or the Ruelle–Frobenius–Perron operator. The largest eigenvalue is the Frobenius–Perron eigenvalue, and in this case it is 1. The associated eigenvector is the invariant measure: in this case, it is the Bernoulli measure. Again, {\displaystyle {\mathcal {L}}_{T}(\rho )=\rho } when {\displaystyle \rho (x)=1.}
=== Spectrum ===
To obtain the spectrum of {\displaystyle {\mathcal {L}}_{T}}, one must provide a suitable set of basis functions for the space {\displaystyle {\mathcal {F}}.} One such choice is to restrict {\displaystyle {\mathcal {F}}} to the set of all polynomials. In this case, the operator has a discrete spectrum, and the eigenfunctions are (curiously) the Bernoulli polynomials. (This coincidence of naming was presumably not known to Bernoulli.)
Indeed, one can easily verify that
{\displaystyle {\mathcal {L}}_{T}B_{n}=2^{-n}B_{n}}
where the {\displaystyle B_{n}} are the Bernoulli polynomials. This follows because the Bernoulli polynomials obey the identity
{\displaystyle {\frac {1}{2}}B_{n}\!\left({\frac {y}{2}}\right)+{\frac {1}{2}}B_{n}\!\left({\frac {y+1}{2}}\right)=2^{-n}B_{n}(y)}
Note that {\displaystyle B_{0}(x)=1.}
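The eigenfunction identity can be verified symbolically; the sketch below (using sympy, an assumed dependency) checks it for the first few values of n:

```python
from sympy import symbols, bernoulli, simplify, Rational

y = symbols('y')

def is_eigenfunction(n):
    """Check (1/2) B_n(y/2) + (1/2) B_n((y+1)/2) == 2^{-n} B_n(y)."""
    lhs = Rational(1, 2) * (bernoulli(n, y / 2) + bernoulli(n, (y + 1) / 2))
    rhs = Rational(1, 2 ** n) * bernoulli(n, y)
    return simplify(lhs - rhs) == 0

ok = all(is_eigenfunction(n) for n in range(6))
print(ok)  # True
```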
Another basis is provided by the Haar basis, and the functions spanning the space are the Haar wavelets. In this case, one finds a continuous spectrum, consisting of the unit disk on the complex plane. Given {\displaystyle z\in \mathbb {C} } in the unit disk, so that {\displaystyle |z|<1}, the functions
{\displaystyle \psi _{z,k}(x)=\sum _{n=1}^{\infty }z^{n}\exp i\pi (2k+1)2^{n}x}
obey
{\displaystyle {\mathcal {L}}_{T}\psi _{z,k}=z\psi _{z,k}}
for {\displaystyle k\in \mathbb {Z} .} This is a complete basis, in that every integer can be written in the form {\displaystyle (2k+1)2^{n}.} The Bernoulli polynomials are recovered by setting {\displaystyle k=0} and {\displaystyle z={\frac {1}{2}},{\frac {1}{4}},\dots }
A complete basis can be given in other ways, as well; they may be written in terms of the Hurwitz zeta function. Another complete basis is provided by the Takagi function. This is a fractal, nowhere-differentiable function. The eigenfunctions are explicitly of the form
{\displaystyle {\mbox{blanc}}_{w,k}(x)=\sum _{n=0}^{\infty }w^{n}s((2k+1)2^{n}x)}
where {\displaystyle s(x)} is the triangle wave. One has, again,
{\displaystyle {\mathcal {L}}_{T}{\mbox{blanc}}_{w,k}=w\;{\mbox{blanc}}_{w,k}.}
All of these different bases can be expressed as linear combinations of one-another. In this sense, they are equivalent.
The fractal eigenfunctions show an explicit symmetry under the fractal groupoid of the modular group; this is developed in greater detail in the article on the Takagi function (the blancmange curve). This is perhaps not a surprise, as the Cantor set has exactly the same set of symmetries (as do the continued fractions). This then leads elegantly into the theory of elliptic equations and modular forms.
== Relation to the Ising model ==
The Hamiltonian of the zero-field one-dimensional Ising model of {\displaystyle 2N} spins with periodic boundary conditions can be written as
{\displaystyle H(\sigma )=g\sum _{i\in \mathbb {Z} _{2N}}\sigma _{i}\sigma _{i+1}.}
Letting {\displaystyle C} be a suitably chosen normalization constant and {\displaystyle \beta } be the inverse temperature for the system, the partition function for this model is given by
{\displaystyle Z=\sum _{\{\sigma _{i}=\pm 1,\,i\in \mathbb {Z} _{2N}\}}\prod _{i\in \mathbb {Z} _{2N}}Ce^{-\beta g\sigma _{i}\sigma _{i+1}}.}
We can implement the renormalization group by integrating out every other spin. In so doing, one finds that {\displaystyle Z} can also be equated with the partition function for a smaller system with but {\displaystyle N} spins,
{\displaystyle Z=\sum _{\{\sigma _{i}=\pm 1,\,i\in \mathbb {Z} _{N}\}}\prod _{i\in \mathbb {Z} _{N}}{\mathcal {R}}[C]e^{-{\mathcal {R}}[\beta g]\sigma _{i}\sigma _{i+1}},}
provided we replace {\displaystyle C} and {\displaystyle \beta g} with renormalized values {\displaystyle {\mathcal {R}}[C]} and {\displaystyle {\mathcal {R}}[\beta g]} satisfying the equations
{\displaystyle {\mathcal {R}}[C]^{2}=4\cosh(2\beta g)C^{4},}
{\displaystyle e^{-2{\mathcal {R}}[\beta g]}=\cosh(2\beta g).}
Suppose now that we allow {\displaystyle \beta g} to be complex and that {\displaystyle \operatorname {Im} [2\beta g]={\frac {\pi }{2}}+\pi n} for some {\displaystyle n\in \mathbb {Z} }. In that case we can introduce a parameter {\displaystyle t\in [0,1)} related to {\displaystyle \beta g} via the equation
{\displaystyle e^{-2\beta g}=i\tan {\big (}\pi (t-{\frac {1}{2}}){\big )},}
and the resulting renormalization group transformation for {\displaystyle t} will be precisely the dyadic map:
{\displaystyle {\mathcal {R}}[t]=2t{\bmod {1}}.}
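This can be confirmed numerically. In the sketch below (variable names hypothetical), `e2bg` stands for e^{-2βg}; applying the recursion e^{-2R[βg]} = cosh(2βg) to the parametrization above reproduces the doubling of t:

```python
import math

def param(t):
    """e^{-2 beta g} = i tan(pi (t - 1/2)), for t in (0, 1) with t != 1/2."""
    return 1j * math.tan(math.pi * (t - 0.5))

def rg(e2bg):
    """e^{-2 R[beta g]} = cosh(2 beta g) = (e^{2 beta g} + e^{-2 beta g}) / 2."""
    return (1.0 / e2bg + e2bg) / 2.0

t = 0.3
lhs = rg(param(t))              # renormalized coupling, computed directly
rhs = param((2 * t) % 1.0)      # coupling at the dyadic image of t
print(abs(lhs - rhs))           # essentially 0: R[t] = 2t mod 1
```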
== See also ==
Bernoulli process
Bernoulli scheme
Gilbert–Shannon–Reeds model, a random distribution on permutations given by applying the doubling map to a set of n uniformly random points on the unit interval
== Notes ==
== References ==
[source: Wikipedia/Dyadic_transformation]
Chaos theory is a mathematical theory describing erratic behavior in certain nonlinear dynamical systems.
Chaos Theory may also refer to:
== Film and television ==
Chaos Theory (film), a 2008 comedy-drama
"Chaos Theory", an episode of ER (season 9)
"Chaos Theory", an episode of CSI: Crime Scene Investigation (season 2)
"Chaos Theory", an episode of The Unit (season 4)
"Chaos Theory" (Agents of S.H.I.E.L.D.)
Jurassic World: Chaos Theory, a Netflix-exclusive show in the Jurassic Park franchise
== Other uses ==
Chaos Theory (demo), a 2006 computer demo by Conspiracy
"Chaos Theory", an episode of the video game Life Is Strange
"Chaos Theory", a name for a type of German suplex in pro wrestling
Chaos Theory: Part 1, a 2012 EP by Like A Storm
The Chaos Theory, a 2002 album by Jumpsteady
Tom Clancy's Splinter Cell: Chaos Theory, a 2005 video game
Chaos Theory – The Soundtrack to Tom Clancy's Splinter Cell: Chaos Theory, by Amon Tobin, 2005
== See also ==
All pages with titles beginning with Chaos Theory
All pages with titles containing Chaos Theory
[source: Wikipedia/Chaos_theory_(disambiguation)]
In mathematics, an interval exchange transformation is a kind of dynamical system that generalises circle rotation. The phase space consists of the unit interval, and the transformation acts by cutting the interval into several subintervals, and then permuting these subintervals. They arise naturally in the study of polygonal billiards and in area-preserving flows.
== Formal definition ==
Let {\displaystyle n>0} and let {\displaystyle \pi } be a permutation on {\displaystyle 1,\dots ,n}. Consider a vector {\displaystyle \lambda =(\lambda _{1},\dots ,\lambda _{n})} of positive real numbers (the widths of the subintervals), satisfying
{\displaystyle \sum _{i=1}^{n}\lambda _{i}=1.}
Define a map
{\displaystyle T_{\pi ,\lambda }:[0,1]\rightarrow [0,1],}
called the interval exchange transformation associated with the pair {\displaystyle (\pi ,\lambda )}, as follows. For {\displaystyle 1\leq i\leq n} let
{\displaystyle a_{i}=\sum _{1\leq j<i}\lambda _{j}\quad {\text{and}}\quad a'_{i}=\sum _{1\leq j<\pi (i)}\lambda _{\pi ^{-1}(j)}.}
Then for {\displaystyle x\in [0,1]}, define
{\displaystyle T_{\pi ,\lambda }(x)=x-a_{i}+a'_{i}}
if {\displaystyle x} lies in the subinterval {\displaystyle [a_{i},a_{i}+\lambda _{i})}. Thus {\displaystyle T_{\pi ,\lambda }} acts on each subinterval of the form {\displaystyle [a_{i},a_{i}+\lambda _{i})} by a translation, and it rearranges these subintervals so that the subinterval at position {\displaystyle i} is moved to position {\displaystyle \pi (i)}.
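A direct implementation of this definition may help; the sketch below (helper names hypothetical) builds T_{π,λ} from a permutation, given as a 1-indexed dict, and a width vector:

```python
def make_iet(pi, lam):
    """Build the interval exchange transformation T_{pi, lam}.
    pi: dict mapping position i to pi(i), 1-indexed; lam: list of widths."""
    n = len(lam)
    assert abs(sum(lam) - 1.0) < 1e-12
    inv = {v: k for k, v in pi.items()}               # pi^{-1}
    a = [sum(lam[:i]) for i in range(n)]              # a_i = sum_{j < i} lambda_j
    # a'_i = sum over j < pi(i) of lambda_{pi^{-1}(j)}
    a_prime = [sum(lam[inv[j] - 1] for j in range(1, pi[i + 1])) for i in range(n)]

    def T(x):
        for i in range(n):
            if a[i] <= x < a[i] + lam[i]:
                return x - a[i] + a_prime[i]
        return x  # edge case x == 1
    return T

# With n = 2 and pi = (12), the IET is a circle rotation by lambda_2:
T = make_iet({1: 2, 2: 1}, [0.25, 0.75])
print(T(0.1))   # ≈ 0.85  (0.1 + 0.75, wrapping the circle)
print(T(0.5))   # ≈ 0.25  (0.5 - 0.25)
```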
== Properties ==
Any interval exchange transformation {\displaystyle T_{\pi ,\lambda }} is a bijection of {\displaystyle [0,1]} to itself that preserves the Lebesgue measure. It is continuous except at a finite number of points.
The inverse of the interval exchange transformation {\displaystyle T_{\pi ,\lambda }} is again an interval exchange transformation. In fact, it is the transformation {\displaystyle T_{\pi ^{-1},\lambda '}} where {\displaystyle \lambda '_{i}=\lambda _{\pi ^{-1}(i)}} for all {\displaystyle 1\leq i\leq n}.
If {\displaystyle n=2} and {\displaystyle \pi =(12)} (in cycle notation), and if we join up the ends of the interval to make a circle, then {\displaystyle T_{\pi ,\lambda }} is just a circle rotation. The Weyl equidistribution theorem then asserts that if the length {\displaystyle \lambda _{1}} is irrational, then {\displaystyle T_{\pi ,\lambda }} is uniquely ergodic. Roughly speaking, this means that the orbits of points of {\displaystyle [0,1]} are uniformly evenly distributed. On the other hand, if {\displaystyle \lambda _{1}} is rational then each point of the interval is periodic, and the period is the denominator of {\displaystyle \lambda _{1}} (written in lowest terms).
If {\displaystyle n>2}, and provided {\displaystyle \pi } satisfies certain non-degeneracy conditions (namely, there is no integer {\displaystyle 0<k<n} such that {\displaystyle \pi (\{1,\dots ,k\})=\{1,\dots ,k\}}), a deep theorem conjectured by M. Keane and due independently to William A. Veech and to Howard Masur asserts that for almost all choices of {\displaystyle \lambda } in the unit simplex {\displaystyle \{(t_{1},\dots ,t_{n}):\sum t_{i}=1\}} the interval exchange transformation {\displaystyle T_{\pi ,\lambda }} is again uniquely ergodic. However, for {\displaystyle n\geq 4} there also exist choices of {\displaystyle (\pi ,\lambda )} so that {\displaystyle T_{\pi ,\lambda }} is ergodic but not uniquely ergodic. Even in these cases, the number of ergodic invariant measures of {\displaystyle T_{\pi ,\lambda }} is finite, and is at most {\displaystyle n}.
Interval exchange maps have topological entropy zero.
== Odometers ==
The dyadic odometer can be understood as an interval exchange transformation of a countable number of intervals. The dyadic odometer is most easily written as the transformation
{\displaystyle T\left(1,\dots ,1,0,b_{k+1},b_{k+2},\dots \right)=\left(0,\dots ,0,1,b_{k+1},b_{k+2},\dots \right)}
defined on the Cantor space {\displaystyle \{0,1\}^{\mathbb {N} }.}
The standard mapping from Cantor space into the unit interval is given by
{\displaystyle (b_{0},b_{1},b_{2},\cdots )\mapsto x=\sum _{n=0}^{\infty }b_{n}2^{-n-1}}
This mapping is a measure-preserving homomorphism from the Cantor set to the unit interval, in that it maps the standard Bernoulli measure on the Cantor set to the Lebesgue measure on the unit interval. A visualization of the odometer and its first three iterates appears on the right.
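The "add one with carry" character of the odometer is easy to see in code. A minimal sketch on finite bit-prefixes (function names hypothetical):

```python
def odometer(bits):
    """Dyadic odometer on a bit-prefix, least-significant bit first:
    flip the leading run of 1s to 0 and the first 0 to 1 (add one with carry)."""
    out = list(bits)
    for i, b in enumerate(out):
        if b == 0:
            out[i] = 1
            return out
        out[i] = 0
    return out  # an all-ones prefix wraps around to all zeros

def to_x(bits):
    """Standard embedding (b0, b1, ...) -> sum_n b_n 2^{-n-1}."""
    return sum(b * 2.0 ** (-n - 1) for n, b in enumerate(bits))

seq = [1, 1, 0, 1, 0, 0]
print(to_x(seq), to_x(odometer(seq)))  # 0.8125 0.1875
```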
== Higher dimensions ==
Two and higher-dimensional generalizations include polygon exchanges, polyhedral exchanges and piecewise isometries.
== See also ==
Odometer
== Notes ==
== References ==
Artur Avila and Giovanni Forni, Weak mixing for interval exchange transformations and translation flows, arXiv:math/0406326v1, https://arxiv.org/abs/math.DS/0406326
[source: Wikipedia/Interval_exchange_transformation]
A neuron (American English), neurone (British English), or nerve cell, is an excitable cell that fires electric signals called action potentials across a neural network in the nervous system. They are located in the nervous system and help to receive and conduct impulses. Neurons communicate with other cells via synapses, which are specialized connections that commonly use minute amounts of chemical neurotransmitters to pass the electric signal from the presynaptic neuron to the target cell through the synaptic gap.
Neurons are the main components of nervous tissue in all animals except sponges and placozoans. Plants and fungi do not have nerve cells. Molecular evidence suggests that the ability to generate electric signals first appeared in evolution some 700 to 800 million years ago, during the Tonian period. Predecessors of neurons were the peptidergic secretory cells. They eventually gained new gene modules which enabled cells to create post-synaptic scaffolds and ion channels that generate fast electrical signals. The ability to generate electric signals was a key innovation in the evolution of the nervous system.
Neurons are typically classified into three types based on their function. Sensory neurons respond to stimuli such as touch, sound, or light that affect the cells of the sensory organs, and they send signals to the spinal cord or brain. Motor neurons receive signals from the brain and spinal cord to control everything from muscle contractions to glandular output. Interneurons connect neurons to other neurons within the same region of the brain or spinal cord. When multiple neurons are functionally connected together, they form what is called a neural circuit.
A neuron contains all the structures of other cells such as a nucleus, mitochondria, and Golgi bodies but has additional unique structures such as an axon, and dendrites. The soma is a compact structure, and the axon and dendrites are filaments extruding from the soma. Dendrites typically branch profusely and extend a few hundred micrometers from the soma. The axon leaves the soma at a swelling called the axon hillock and travels for as far as 1 meter in humans or more in other species. It branches but usually maintains a constant diameter. At the farthest tip of the axon's branches are axon terminals, where the neuron can transmit a signal across the synapse to another cell. Neurons may lack dendrites or have no axons. The term neurite is used to describe either a dendrite or an axon, particularly when the cell is undifferentiated.
Most neurons receive signals via the dendrites and soma and send out signals down the axon. At the majority of synapses, signals cross from the axon of one neuron to the dendrite of another. However, synapses can connect an axon to another axon or a dendrite to another dendrite. The signaling process is partly electrical and partly chemical. Neurons are electrically excitable, due to the maintenance of voltage gradients across their membranes. If the voltage changes by a large enough amount over a short interval, the neuron generates an all-or-nothing electrochemical pulse called an action potential. This potential travels rapidly along the axon and activates synaptic connections as it reaches them. Synaptic signals may be excitatory or inhibitory, increasing or reducing the net voltage that reaches the soma.
In most cases, neurons are generated by neural stem cells during brain development and childhood. Neurogenesis largely ceases during adulthood in most areas of the brain.
== Nervous system ==
Neurons are the primary components of the nervous system, along with the glial cells that give them structural and metabolic support. The nervous system is made up of the central nervous system, which includes the brain and spinal cord, and the peripheral nervous system, which includes the autonomic, enteric and somatic nervous systems. In vertebrates, the majority of neurons belong to the central nervous system, but some reside in peripheral ganglia, and many sensory neurons are situated in sensory organs such as the retina and cochlea.
Axons may bundle into nerve fascicles that make up the nerves in the peripheral nervous system (like strands of wire that make up a cable). In the central nervous system bundles of axons are called nerve tracts.
== Anatomy and histology ==
Neurons are highly specialized for the processing and transmission of cellular signals. Given the diversity of functions performed in different parts of the nervous system, there is a wide variety in their shape, size, and electrochemical properties. For instance, the soma of a neuron can vary from 4 to 100 micrometers in diameter.
The soma is the body of the neuron. As it contains the nucleus, most protein synthesis occurs here. The nucleus can range from 3 to 18 micrometers in diameter.
The dendrites of a neuron are cellular extensions with many branches. This overall shape and structure are referred to metaphorically as a dendritic tree. This is where the majority of input to the neuron occurs via the dendritic spine.
The axon is a finer, cable-like projection that can extend tens, hundreds, or even tens of thousands of times the diameter of the soma in length. The axon primarily carries nerve signals away from the soma and carries some types of information back to it. Many neurons have only one axon, but this axon may—and usually will—undergo extensive branching, enabling communication with many target cells. The part of the axon where it emerges from the soma is called the axon hillock. Besides being an anatomical structure, the axon hillock also has the greatest density of voltage-dependent sodium channels. This makes it the most easily excited part of the neuron and the spike initiation zone for the axon. In electrophysiological terms, it has the most negative threshold potential.
While the axon and axon hillock are generally involved in information outflow, this region can also receive input from other neurons.
The axon terminal is found at the end of the axon farthest from the soma and contains synapses. Synaptic boutons are specialized structures where neurotransmitter chemicals are released to communicate with target neurons. In addition to synaptic boutons at the axon terminal, a neuron may have en passant boutons, which are located along the length of the axon.
The accepted view of the neuron attributes dedicated functions to its various anatomical components; however, dendrites and axons often act in ways contrary to their so-called main function.
Axons and dendrites in the central nervous system are typically only about one micrometer thick, while some in the peripheral nervous system are much thicker. The soma is usually about 10–25 micrometers in diameter and often is not much larger than the cell nucleus it contains. The longest axon of a human motor neuron can be over a meter long, reaching from the base of the spine to the toes.
Sensory neurons can have axons that run from the toes to the posterior column of the spinal cord, over 1.5 meters in adults. Giraffes have single axons several meters in length running along the entire length of their necks. Much of what is known about axonal function comes from studying the squid giant axon, an ideal experimental preparation because of its relatively immense size (0.5–1 millimeter thick, several centimeters long).
Fully differentiated neurons are permanently postmitotic; however, stem cells present in the adult brain may regenerate functional neurons throughout the life of an organism (see neurogenesis). Astrocytes are star-shaped glial cells that have been observed to turn into neurons, by virtue of their stem-cell-like characteristic of pluripotency.
=== Membrane ===
Like all animal cells, the cell body of every neuron is enclosed by a plasma membrane, a bilayer of lipid molecules with many types of protein structures embedded in it. A lipid bilayer is a powerful electrical insulator, but in neurons, many of the protein structures embedded in the membrane are electrically active. These include ion channels that permit electrically charged ions to flow across the membrane and ion pumps that chemically transport ions from one side of the membrane to the other. Most ion channels are permeable only to specific types of ions. Some ion channels are voltage gated, meaning that they can be switched between open and closed states by altering the voltage difference across the membrane. Others are chemically gated, meaning that they can be switched between open and closed states by interactions with chemicals that diffuse through the extracellular fluid. The ion materials include sodium, potassium, chloride, and calcium. The interactions between ion channels and ion pumps produce a voltage difference across the membrane, typically a bit less than 1/10 of a volt at baseline. This voltage has two functions: first, it provides a power source for an assortment of voltage-dependent protein machinery that is embedded in the membrane; second, it provides a basis for electrical signal transmission between different parts of the membrane.
=== Histology and internal structure ===
Numerous microscopic clumps called Nissl bodies (or Nissl substance) are seen when nerve cell bodies are stained with a basophilic ("base-loving") dye. These structures consist of rough endoplasmic reticulum and associated ribosomal RNA. Named after German psychiatrist and neuropathologist Franz Nissl (1860–1919), they are involved in protein synthesis and their prominence can be explained by the fact that nerve cells are very metabolically active. Basophilic dyes such as aniline or (weakly) hematoxylin highlight negatively charged components, and so bind to the phosphate backbone of the ribosomal RNA.
The cell body of a neuron is supported by a complex mesh of structural proteins called neurofilaments, which together with neurotubules (neuronal microtubules) are assembled into larger neurofibrils. Some neurons also contain pigment granules, such as neuromelanin (a brownish-black pigment that is a byproduct of the synthesis of catecholamines) and lipofuscin (a yellowish-brown pigment), both of which accumulate with age. Other structural proteins that are important for neuronal function are actin and the tubulin of microtubules. Class III β-tubulin is found almost exclusively in neurons. Actin is predominately found at the tips of axons and dendrites during neuronal development. There the actin dynamics can be modulated via an interplay with microtubules.
There are different internal structural characteristics between axons and dendrites. Typical axons seldom contain ribosomes, except some in the initial segment. Dendrites contain granular endoplasmic reticulum or ribosomes, in diminishing amounts as the distance from the cell body increases.
== Classification ==
Neurons vary in shape and size and can be classified by their morphology and function. The anatomist Camillo Golgi grouped neurons into two types: type I, with long axons used to move signals over long distances, and type II, with short axons, which can often be confused with dendrites. Type I cells can be further classified by the location of the soma. The basic morphology of type I neurons, represented by spinal motor neurons, consists of a cell body called the soma and a long thin axon covered by a myelin sheath. The dendritic tree wraps around the cell body and receives signals from other neurons. The end of the axon has branching axon terminals that release neurotransmitters into a gap called the synaptic cleft between the terminals and the dendrites of the next neuron.
=== Structural classification ===
==== Polarity ====
Most neurons can be anatomically characterized as:
Unipolar: single process. Unipolar cells are exclusively sensory neurons. Their dendrites receive sensory information, sometimes directly from the stimulus itself. The cell bodies of unipolar neurons are always found in ganglia. Sensory reception is a peripheral function, so the cell body is in the periphery, though closer to the CNS in a ganglion. The axon projects from the dendrite endings, past the cell body in a ganglion, and into the central nervous system.
Bipolar: 1 axon and 1 dendrite. They are found mainly in the olfactory epithelium, and as part of the retina.
Multipolar: 1 axon and 2 or more dendrites
Golgi I: neurons with long-projecting axonal processes; examples are pyramidal cells, Purkinje cells, and anterior horn cells
Golgi II: neurons whose axonal process projects locally; the best example is the granule cell
Anaxonic: where the axon cannot be distinguished from the dendrite(s)
Pseudounipolar: 1 process which then serves as both an axon and a dendrite
==== Other ====
Some unique neuronal types can be identified according to their location in the nervous system and distinct shape. Some examples are:
Basket cells, interneurons that form a dense plexus of terminals around the soma of target cells, found in the cortex and cerebellum
Betz cells, large motor neurons in primary motor cortex
Lugaro cells, interneurons of the cerebellum
Medium spiny neurons, most neurons in the corpus striatum
Purkinje cells, huge neurons in the cerebellum, a type of Golgi I multipolar neuron
Pyramidal cells, neurons with triangular soma, a type of Golgi I
Rosehip cells, unique human inhibitory neurons that interconnect with pyramidal cells
Renshaw cells, neurons with both ends linked to alpha motor neurons
Unipolar brush cells, interneurons with unique dendrite ending in a brush-like tuft
Granule cells, a type of Golgi II neuron
Anterior horn cells, motoneurons located in the spinal cord
Spindle cells, interneurons that connect widely separated areas of the brain
=== Functional classification ===
==== Direction ====
Afferent neurons convey information from tissues and organs into the central nervous system and are also called sensory neurons.
Efferent neurons (motor neurons) transmit signals from the central nervous system to the effector cells.
Interneurons connect neurons within specific regions of the central nervous system.
Afferent and efferent also refer generally to neurons that, respectively, bring information to or send information from the brain.
==== Action on other neurons ====
A neuron affects other neurons by releasing a neurotransmitter that binds to chemical receptors. The effect on the postsynaptic neuron is determined by the type of receptor that is activated, not by the presynaptic neuron or by the neurotransmitter. Receptors are classified broadly as excitatory (causing an increase in firing rate), inhibitory (causing a decrease in firing rate), or modulatory (causing long-lasting effects not directly related to firing rate).
The two most common (90%+) neurotransmitters in the brain, glutamate and GABA, have largely consistent actions. Glutamate acts on several types of receptors and has effects that are excitatory at ionotropic receptors and a modulatory effect at metabotropic receptors. Similarly, GABA acts on several types of receptors, but all of them have inhibitory effects (in adult animals, at least). Because of this consistency, it is common for neuroscientists to refer to cells that release glutamate as "excitatory neurons", and cells that release GABA as "inhibitory neurons". Some other types of neurons have consistent effects, for example, "excitatory" motor neurons in the spinal cord that release acetylcholine, and "inhibitory" spinal neurons that release glycine.
The distinction between excitatory and inhibitory neurotransmitters is not absolute. Rather, it depends on the class of chemical receptors present on the postsynaptic neuron. In principle, a single neuron, releasing a single neurotransmitter, can have excitatory effects on some targets, inhibitory effects on others, and modulatory effects on others still. For example, photoreceptor cells in the retina constantly release the neurotransmitter glutamate in the absence of light. So-called OFF bipolar cells are, like most neurons, excited by the released glutamate. However, neighboring target neurons called ON bipolar cells are instead inhibited by glutamate, because they lack typical ionotropic glutamate receptors and instead express a class of inhibitory metabotropic glutamate receptors. When light is present, the photoreceptors cease releasing glutamate, which relieves the ON bipolar cells from inhibition, activating them; this simultaneously removes the excitation from the OFF bipolar cells, silencing them.
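The retina example above can be sketched in Python: the same transmitter produces opposite effects because the postsynaptic receptor class, not the transmitter, determines the response. The receptor names and effect table below are illustrative simplifications, not a complete receptor taxonomy.

```python
# Illustrative mapping: the activated receptor class determines the effect.
RECEPTOR_EFFECT = {
    "ionotropic_glutamate": "excitatory",    # typical glutamate receptors
    "metabotropic_glutamate": "inhibitory",  # class expressed by ON bipolar cells
    "GABA_A": "inhibitory",
}

def postsynaptic_effect(transmitter, receptor):
    """The effect depends only on the receptor the transmitter binds."""
    return RECEPTOR_EFFECT[receptor]

# Glutamate excites OFF bipolar cells but inhibits ON bipolar cells.
off_cell = postsynaptic_effect("glutamate", "ionotropic_glutamate")
on_cell = postsynaptic_effect("glutamate", "metabotropic_glutamate")
```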
It is possible to identify the type of inhibitory effect a presynaptic neuron will have on a postsynaptic neuron, based on the proteins the presynaptic neuron expresses. Parvalbumin-expressing neurons typically dampen the output signal of the postsynaptic neuron in the visual cortex, whereas somatostatin-expressing neurons typically block dendritic inputs to the postsynaptic neuron.
==== Discharge patterns ====
Neurons have intrinsic electroresponsive properties like intrinsic transmembrane voltage oscillatory patterns. So neurons can be classified according to their electrophysiological characteristics:
Tonic or regular spiking. Some neurons are constantly (tonically) active, typically firing at a constant frequency. Example: interneurons in the neostriatum.
Phasic or bursting. Neurons that fire in bursts are called phasic.
Fast-spiking. Some neurons are notable for their high firing rates, for example, some types of cortical inhibitory interneurons, cells in globus pallidus, retinal ganglion cells.
==== Neurotransmitter ====
Neurotransmitters are chemical messengers passed from one neuron to another neuron or to a muscle cell or gland cell.
Cholinergic neurons – acetylcholine. Acetylcholine is released from presynaptic neurons into the synaptic cleft. It acts as a ligand for both ligand-gated ion channels and metabotropic muscarinic receptors (GPCRs). Nicotinic receptors are pentameric ligand-gated ion channels composed of alpha and beta subunits that bind nicotine. Ligand binding opens the channel, causing an influx of Na+ that depolarizes the membrane and increases the probability of presynaptic neurotransmitter release. Acetylcholine is synthesized from choline and acetyl coenzyme A.
Adrenergic neurons – noradrenaline. Noradrenaline (norepinephrine) is released from most postganglionic neurons in the sympathetic nervous system onto two sets of GPCRs: alpha adrenoceptors and beta adrenoceptors. Noradrenaline is one of the three common catecholamine neurotransmitters, and the most prevalent of them in the peripheral nervous system; as with other catecholamines, it is synthesized from tyrosine.
GABAergic neurons – gamma aminobutyric acid. GABA is one of the two main inhibitory neurotransmitters in the central nervous system (CNS), along with glycine. GABA has a function homologous to that of ACh, gating anion channels that allow Cl− ions to enter the postsynaptic neuron. The Cl− influx causes hyperpolarization within the neuron, decreasing the probability of an action potential firing as the voltage becomes more negative (for an action potential to fire, a positive voltage threshold must be reached). GABA is synthesized from glutamate by the enzyme glutamate decarboxylase.
Glutamatergic neurons – glutamate. Glutamate is one of two primary excitatory amino acid neurotransmitters, along with aspartate. Glutamate receptors fall into four categories, three of which are ligand-gated ion channels and one of which is a G-protein coupled receptor (GPCR).
AMPA and kainate receptors function as cation channels permeable to Na+, mediating fast excitatory synaptic transmission.
NMDA receptors are cation channels that are more permeable to Ca2+. Their function depends on glycine binding as a co-agonist; NMDA receptors do not function unless both ligands are present.
Metabotropic glutamate receptors (GPCRs) modulate synaptic transmission and postsynaptic excitability.
Glutamate can cause excitotoxicity when blood flow to the brain is interrupted, resulting in brain damage. When blood flow is suppressed, glutamate is released from presynaptic neurons, causing greater NMDA and AMPA receptor activation than under normal conditions, leading to elevated Ca2+ and Na+ entry into the postsynaptic neuron and cell damage. Glutamate is synthesized from the amino acid glutamine by the enzyme glutaminase.
Dopaminergic neurons—dopamine. Dopamine is a neurotransmitter that acts on D1 type (D1 and D5) Gs-coupled receptors, which increase cAMP and PKA, and D2 type (D2, D3, and D4) receptors, which activate Gi-coupled receptors that decrease cAMP and PKA. Dopamine is connected to mood and behavior and modulates both pre- and post-synaptic neurotransmission. Loss of dopamine neurons in the substantia nigra has been linked to Parkinson's disease. Dopamine is synthesized from the amino acid tyrosine. Tyrosine is catalyzed into levodopa (or L-DOPA) by tyrosine hydroxylase, and levodopa is then converted into dopamine by the aromatic amino acid decarboxylase.
Serotonergic neurons—serotonin. Serotonin (5-hydroxytryptamine, 5-HT) can act as an excitatory or inhibitory neurotransmitter. Of its 5-HT receptor classes, all but one are GPCRs; the exception (5-HT3) is a ligand-gated cation channel. Serotonin is synthesized from tryptophan by tryptophan hydroxylase and then converted by aromatic amino acid decarboxylase. A lack of 5-HT at postsynaptic neurons has been linked to depression. Drugs that block the presynaptic serotonin transporter are used for treatment, such as Prozac and Zoloft.
Purinergic neurons—ATP. ATP is a neurotransmitter acting at both ligand-gated ion channels (P2X receptors) and GPCRs (P2Y receptors). ATP is, however, best known as a cotransmitter. Such purinergic signaling can also be mediated by other purines, such as adenosine.
Histaminergic neurons—histamine. Histamine is a monoamine neurotransmitter and neuromodulator. Histamine-producing neurons are found in the tuberomammillary nucleus of the hypothalamus. Histamine is involved in arousal and regulating sleep/wake behaviors.
==== Multimodal classification ====
Since 2012 there has been a push from the cellular and computational neuroscience community to come up with a universal classification of neurons that will apply to all neurons in the brain as well as across species. This is done by considering three essential qualities of all neurons: electrophysiology, morphology, and the individual transcriptome of the cells. Besides being universal, this classification has the advantage of being able to classify astrocytes as well. A method called patch-sequencing, in which all three qualities can be measured at once, is used extensively by the Allen Institute for Brain Science. In 2023, a comprehensive cell atlas of the adult and developing human brain at the transcriptional, epigenetic, and functional levels was created through an international collaboration of researchers using cutting-edge molecular biology approaches.
== Connectivity ==
Neurons communicate with each other via synapses, where either the axon terminal of one cell contacts another neuron's dendrite, soma, or, less commonly, axon. Neurons such as Purkinje cells in the cerebellum can have over 1000 dendritic branches, making connections with tens of thousands of other cells; other neurons, such as the magnocellular neurons of the supraoptic nucleus, have only one or two dendrites, each of which receives thousands of synapses.
Synapses can be excitatory or inhibitory, either increasing or decreasing activity in the target neuron, respectively. Some neurons also communicate via electrical synapses, which are direct, electrically conductive junctions between cells.
When an action potential reaches the axon terminal, it opens voltage-gated calcium channels, allowing calcium ions to enter the terminal. Calcium causes synaptic vesicles filled with neurotransmitter molecules to fuse with the membrane, releasing their contents into the synaptic cleft. The neurotransmitters diffuse across the synaptic cleft and activate receptors on the postsynaptic neuron. High cytosolic calcium in the axon terminal triggers mitochondrial calcium uptake, which, in turn, activates mitochondrial energy metabolism to produce ATP to support continuous neurotransmission.
An autapse is a synapse in which a neuron's axon connects to its own dendrites.
The human brain has some 8.6 × 10¹⁰ (eighty-six billion) neurons. Each neuron has on average 7,000 synaptic connections to other neurons. It has been estimated that the brain of a three-year-old child has about 10¹⁵ synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10¹⁴ to 5 × 10¹⁴ synapses (100 to 500 trillion).
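The synapse figures quoted above can be checked with back-of-envelope arithmetic; the inputs are rough published estimates, not precise counts.

```python
# Rough arithmetic on the estimates quoted above.
neurons = 8.6e10             # about 86 billion neurons
synapses_per_neuron = 7_000  # average connections per neuron

total_synapses = neurons * synapses_per_neuron
# Roughly 6e14, the same order of magnitude as the quoted adult
# estimates of 1e14 to 5e14 synapses.
```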
=== Nonelectrochemical signaling ===
Beyond electrical and chemical signaling, studies suggest neurons in healthy human brains can also communicate through:
force generated by the enlargement of dendritic spines
the transfer of proteins – transneuronally transported proteins (TNTPs)
Neurons can also be modulated by input from the environment and by hormones released from other parts of the organism, which in turn could be influenced more or less directly by neurons. This also applies to neurotrophins such as BDNF. The gut microbiome is also connected with the brain.
Neurons also communicate with microglia, the brain's main immune cells, via specialized contact sites called "somatic junctions". These connections enable microglia to constantly monitor and regulate neuronal functions, and to exert neuroprotection when needed.
== Mechanisms for propagating action potentials ==
In 1937 John Zachary Young suggested that the squid giant axon could be used to study neuronal electrical properties. It is larger than but similar to human neurons, making it easier to study. By inserting electrodes into the squid giant axons, accurate measurements were made of the membrane potential.
The cell membrane of the axon and soma contain voltage-gated ion channels that allow the neuron to generate and propagate an electrical signal (an action potential). Some neurons also generate subthreshold membrane potential oscillations. These signals are generated and propagated by charge-carrying ions including sodium (Na+), potassium (K+), chloride (Cl−), and calcium (Ca2+).
Several stimuli can activate a neuron leading to electrical activity, including pressure, stretch, chemical transmitters, and changes in the electric potential across the cell membrane. Stimuli cause specific ion-channels within the cell membrane to open, leading to a flow of ions through the cell membrane, changing the membrane potential. Neurons must maintain the specific electrical properties that define their neuron type.
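A minimal leaky integrate-and-fire model illustrates the mechanism described above: stimuli that open ion channels change the membrane potential, and a sufficiently strong input drives it to a firing threshold. All parameter values here are illustrative, not measured neuronal constants.

```python
# Leaky integrate-and-fire sketch (illustrative parameters; mV and ms units).
def simulate_lif(i_input, v_rest=-70.0, v_thresh=-55.0, v_reset=-70.0,
                 tau=10.0, resistance=10.0, dt=0.1, steps=5000):
    """Count spikes fired for a constant input current over ~500 ms."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # leak toward rest plus input-driven depolarization
        v += dt * (-(v - v_rest) + resistance * i_input) / tau
        if v >= v_thresh:      # threshold crossed: fire and reset
            spikes += 1
            v = v_reset
    return spikes

weak = simulate_lif(1.0)   # steady state stays below threshold: no spikes
strong = simulate_lif(2.0) # crosses threshold repeatedly
```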
Thin neurons and axons require less metabolic expense to produce and carry action potentials, but thicker axons convey impulses more rapidly. To minimize metabolic expense while maintaining rapid conduction, many neurons have insulating sheaths of myelin around their axons. The sheaths are formed by glial cells: oligodendrocytes in the central nervous system and Schwann cells in the peripheral nervous system. The sheath enables action potentials to travel faster than in unmyelinated axons of the same diameter, whilst using less energy. The myelin sheath in peripheral nerves normally runs along the axon in sections about 1 mm long, punctuated by unsheathed nodes of Ranvier, which contain a high density of voltage-gated ion channels. Multiple sclerosis is a neurological disorder that results from the demyelination of axons in the central nervous system.
Some neurons do not generate action potentials but instead generate a graded electrical signal, which in turn causes graded neurotransmitter release. Such non-spiking neurons tend to be sensory neurons or interneurons, because they cannot carry signals long distances.
== Neural coding ==
Neural coding is concerned with how sensory and other information is represented in the brain by neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses and the relationships among the electrical activities of the neurons within the ensemble. It is thought that neurons can encode both digital and analog information.
== All-or-none principle ==
The conduction of nerve impulses is an example of an all-or-none response. In other words, if a neuron responds at all, then it must respond completely. Greater intensity of stimulation, such as a brighter image or a louder sound, does not produce a stronger signal, but can increase firing frequency. Receptors respond in different ways to stimuli. Slowly adapting or tonic receptors respond to a steady stimulus and produce a steady rate of firing. Tonic receptors most often respond to increased stimulus intensity by increasing their firing frequency, usually as a power function of stimulus plotted against impulses per second. This can be likened to an intrinsic property of light, where greater intensity of a specific frequency (color) requires more photons: the photons cannot become "stronger" for a specific frequency.
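The tonic-receptor behavior described above, with firing frequency growing as a power function of stimulus intensity, can be sketched directly; the constants k and n are illustrative.

```python
# Tonic receptor: steady firing rate as a power function of intensity.
def firing_rate(intensity, k=10.0, n=0.5):
    """Impulses per second for a given stimulus intensity (illustrative k, n)."""
    return k * intensity ** n

# Stronger stimuli raise the firing frequency, not the spike amplitude.
rates = [firing_rate(i) for i in (1, 4, 16)]   # 10.0, 20.0, 40.0
```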
Other receptor types include quickly adapting or phasic receptors, where firing decreases or stops with a steady stimulus; for example, the skin's touch-sensitive neurons fire when an object is first touched, but stop firing if the object maintains even pressure. The neurons of the skin and muscles that are responsive to pressure and vibration have filtering accessory structures that aid their function.
The pacinian corpuscle is one such structure. It has concentric layers like an onion, which form around the axon terminal. When pressure is applied and the corpuscle is deformed, mechanical stimulus is transferred to the axon, which fires. If the pressure is steady, the stimulus ends; thus, these neurons typically respond with a transient depolarization during the initial deformation and again when the pressure is removed, which causes the corpuscle to change shape again. Other types of adaptation are important in extending the function of several other neurons.
== Etymology and spelling ==
The German anatomist Heinrich Wilhelm Waldeyer introduced the term neuron in 1891, based on the ancient Greek νεῦρον neuron 'sinew, cord, nerve'.
The word was adopted in French with the spelling neurone. That spelling was also used by many writers in English, but has now become rare in American usage and uncommon in British usage.
Some previous works used nerve cell (cellule nervose), as adopted in Camillo Golgi's 1873 paper on the discovery of the silver staining technique used to visualize nervous tissue under light microscopy.
== History ==
The neuron's place as the primary functional unit of the nervous system was first recognized in the late 19th century through the work of the Spanish anatomist Santiago Ramón y Cajal.
To make the structure of individual neurons visible, Ramón y Cajal improved a silver staining process that had been developed by Camillo Golgi. The improved process involves a technique called "double impregnation" and is still in use.
In 1888 Ramón y Cajal published a paper about the bird cerebellum. In this paper, he stated that he could not find evidence for anastomosis between axons and dendrites and called each nervous element "an autonomous canton." This became known as the neuron doctrine, one of the central tenets of modern neuroscience.
In 1891, the German anatomist Heinrich Wilhelm Waldeyer wrote a highly influential review of the neuron doctrine in which he introduced the term neuron to describe the anatomical and physiological unit of the nervous system.
The silver impregnation stains are a useful method for neuroanatomical investigations because, for reasons unknown, they stain only a small percentage of cells in a tissue, exposing the complete microstructure of individual neurons without much overlap from other cells.
=== Neuron doctrine ===
The neuron doctrine is the now fundamental idea that neurons are the basic structural and functional units of the nervous system. The theory was put forward by Santiago Ramón y Cajal in the late 19th century. It held that neurons are discrete cells (not connected in a meshwork), acting as metabolically distinct units.
Later discoveries yielded refinements to the doctrine. For example, glial cells, which are non-neuronal, play an essential role in information processing. Also, electrical synapses are more common than previously thought, comprising direct, cytoplasmic connections between neurons. In fact, neurons can form even tighter couplings: the squid giant axon arises from the fusion of multiple axons.
Ramón y Cajal also postulated the Law of Dynamic Polarization, which states that a neuron receives signals at its dendrites and cell body and transmits them, as action potentials, along the axon in one direction: away from the cell body. The Law of Dynamic Polarization has important exceptions; dendrites can serve as synaptic output sites of neurons and axons can receive synaptic inputs.
=== Compartmental modelling of neurons ===
Although neurons are often described as the "fundamental units" of the brain, they also perform internal computations. Neurons integrate input within dendrites, and this complexity is lost in models that treat neurons as point-like fundamental units. Dendritic branches can be modeled as spatial compartments, whose activity is related to passive membrane properties, but may also differ depending on input from synapses. Compartmental modelling of dendrites is especially helpful for understanding the behavior of neurons that are too small to record with electrodes, as is the case for Drosophila melanogaster.
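A toy version of such a compartmental model treats the neuron as two coupled passive compartments, a dendrite and a soma. Current injected into the dendrite depolarizes the soma less than the dendrite itself, showing the attenuation that point-neuron models miss. All conductances and the time step are illustrative.

```python
# Passive two-compartment sketch: dendrite and soma coupled by an axial
# conductance (illustrative values, explicit Euler integration).
def simulate_two_compartments(i_dendrite, g_leak=0.1, g_couple=0.05,
                              e_rest=-70.0, dt=0.1, steps=2000):
    """Inject current into the dendrite; return final (v_dendrite, v_soma)."""
    v_d = v_s = e_rest
    for _ in range(steps):
        dv_d = -g_leak * (v_d - e_rest) + g_couple * (v_s - v_d) + i_dendrite
        dv_s = -g_leak * (v_s - e_rest) + g_couple * (v_d - v_s)
        v_d += dv_d * dt
        v_s += dv_s * dt
    return v_d, v_s

v_dend, v_soma = simulate_two_compartments(i_dendrite=1.0)
# The input site depolarizes more than the soma: the signal attenuates
# along the dendrite instead of acting on a single "point" neuron.
```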
== Neurons in the brain ==
The number of neurons in the brain varies dramatically from species to species. In a human, there are an estimated 10–20 billion neurons in the cerebral cortex and 55–70 billion neurons in the cerebellum. By contrast, the nematode worm Caenorhabditis elegans has just 302 neurons, making it an ideal model organism as scientists have been able to map all of its neurons. The fruit fly Drosophila melanogaster, a common subject in biological experiments, has around 100,000 neurons and exhibits many complex behaviors. Many properties of neurons, from the type of neurotransmitters used to ion channel composition, are maintained across species, allowing scientists to study processes occurring in more complex organisms in much simpler experimental systems.
== Neurological disorders ==
Charcot–Marie–Tooth disease (CMT) is a heterogeneous inherited disorder of nerves (neuropathy) that is characterized by loss of muscle tissue and touch sensation, predominantly in the feet and legs extending to the hands and arms in advanced stages. Presently incurable, this disease is one of the most common inherited neurological disorders, affecting 36 in 100,000 people.
Alzheimer's disease (AD), also known simply as Alzheimer's, is a neurodegenerative disease characterized by progressive cognitive deterioration, together with declining activities of daily living and neuropsychiatric symptoms or behavioral changes. The most striking early symptom is loss of short-term memory (amnesia), which usually manifests as minor forgetfulness that becomes steadily more pronounced with illness progression, with relative preservation of older memories. As the disorder progresses, cognitive (intellectual) impairment extends to the domains of language (aphasia), skilled movements (apraxia), and recognition (agnosia), and functions such as decision-making and planning become impaired.
Parkinson's disease (PD), also known as Parkinson's, is a degenerative disorder of the central nervous system that often impairs motor skills and speech. Parkinson's disease belongs to a group of conditions called movement disorders. It is characterized by muscle rigidity, tremor, a slowing of physical movement (bradykinesia), and in extreme cases, a loss of physical movement (akinesia). The primary symptoms are the results of decreased stimulation of the motor cortex by the basal ganglia, normally caused by the insufficient formation and action of dopamine, which is produced in the dopaminergic neurons of the brain. Secondary symptoms may include high-level cognitive dysfunction and subtle language problems. PD is both chronic and progressive.
Myasthenia gravis is a neuromuscular disease leading to fluctuating muscle weakness and fatigability during simple activities. Weakness is typically caused by circulating antibodies that block acetylcholine receptors at the postsynaptic neuromuscular junction, inhibiting the stimulative effect of the neurotransmitter acetylcholine. Myasthenia is treated with immunosuppressants, cholinesterase inhibitors and, in selected cases, thymectomy.
=== Demyelination ===
Demyelination is a process characterized by the gradual loss of the myelin sheath enveloping nerve fibers. When myelin deteriorates, signal conduction along nerves can be significantly impaired or lost, and the nerve eventually withers. Demyelination may affect both central and peripheral nervous systems, contributing to various neurological disorders such as multiple sclerosis, Guillain-Barré syndrome, and chronic inflammatory demyelinating polyneuropathy. Although demyelination is often caused by an autoimmune reaction, it may also be caused by viral infections, metabolic disorders, trauma, and some medications.
=== Axonal degeneration ===
Although most injury responses include a calcium influx signaling to promote resealing of severed parts, axonal injuries initially lead to acute axonal degeneration, which is the rapid separation of the proximal and distal ends, occurring within 30 minutes of injury. Degeneration follows with swelling of the axolemma, and eventually leads to bead-like formation. Granular disintegration of the axonal cytoskeleton and inner organelles occurs after axolemma degradation. Early changes include accumulation of mitochondria in the paranodal regions at the site of injury. The endoplasmic reticulum degrades and mitochondria swell up and eventually disintegrate. The disintegration is dependent on ubiquitin and calpain proteases (caused by the influx of calcium ions), suggesting that axonal degeneration is an active process that produces complete fragmentation. The process takes roughly 24 hours in the PNS and longer in the CNS. The signaling pathways leading to axolemma degeneration are unknown.
== Development ==
Neurons develop through the process of neurogenesis, in which neural stem cells divide to produce differentiated neurons. Once fully differentiated they are no longer capable of undergoing mitosis. Neurogenesis primarily occurs during embryonic development.
Neurons initially develop from the neural tube in the embryo. The neural tube has three layers – a ventricular zone, an intermediate zone, and a marginal zone. The ventricular zone surrounds the tube's central canal and becomes the ependyma. Dividing cells of the ventricular zone form the intermediate zone which stretches to the outermost layer of the neural tube called the pial layer. The gray matter of the brain is derived from the intermediate zone. The extensions of the neurons in the intermediate zone make up the marginal zone, which, when myelinated, becomes the brain's white matter.
Neurons differentiate in order of size: large motor neurons differentiate first, while smaller sensory neurons, together with glial cells, differentiate around birth.
Adult neurogenesis can occur and studies of the age of human neurons suggest that this process occurs only for a minority of cells and that the vast majority of neurons in the neocortex form before birth and persist without replacement. The extent to which adult neurogenesis exists in humans, and its contribution to cognition are controversial, with conflicting reports published in 2018.
The body contains a variety of stem cell types that can differentiate into neurons. Researchers found a way to transform human skin cells into nerve cells using transdifferentiation, in which "cells are forced to adopt new identities".
During neurogenesis in the mammalian brain, progenitor and stem cells progress from proliferative divisions to differentiative divisions. This progression leads to the neurons and glia that populate cortical layers. Epigenetic modifications play a key role in regulating gene expression in differentiating neural stem cells, and are critical for cell fate determination in the developing and adult mammalian brain. Epigenetic modifications include DNA cytosine methylation to form 5-methylcytosine and 5-methylcytosine demethylation. DNA cytosine methylation is catalyzed by DNA methyltransferases (DNMTs). Methylcytosine demethylation is catalyzed in several stages by TET enzymes that carry out oxidative reactions (e.g. 5-methylcytosine to 5-hydroxymethylcytosine) and enzymes of the DNA base excision repair (BER) pathway.
At different stages of mammalian nervous system development, two DNA repair processes are employed in the repair of DNA double-strand breaks. These pathways are homologous recombinational repair, used in proliferating neural precursor cells, and non-homologous end joining, used mainly at later developmental stages.
Intercellular communication between developing neurons and microglia is also indispensable for proper neurogenesis and brain development.
== Nerve regeneration ==
Peripheral axons can regrow if they are severed, but one neuron cannot be functionally replaced by one of another type (Llinás' law).
== See also ==
== References ==
== Further reading ==
== External links ==
IBRO (International Brain Research Organization). Fostering neuroscience research especially in less well-funded countries.
NeuronBank an online neuromics tool for cataloging neuronal types and synaptic connectivity.
High Resolution Neuroanatomical Images of Primate and Non-Primate Brains.
The Department of Neuroscience at Wikiversity, which presently offers two courses: Fundamentals of Neuroscience and Comparative Neuroscience.
NIF Search – Neuron Archived 2015-01-22 at the Wayback Machine via the Neuroscience Information Framework
Cell Centered Database – Neuron
Complete list of neuron types according to the Petilla convention, at NeuroLex.
NeuroMorpho.Org an online database of digital reconstructions of neuronal morphology.
Immunohistochemistry Image Gallery: Neuron
Khan Academy: Anatomy of a neuron
Neuron images | Wikipedia/Neuron |
In statistical mechanics, universality is the observation that there are properties for a large class of systems that are independent of the dynamical details of the system. Systems display universality in a scaling limit, when a large number of interacting parts come together. The modern meaning of the term was introduced by Leo Kadanoff in the 1960s, but a simpler version of the concept was already implicit in the van der Waals equation and in the earlier Landau theory of phase transitions, which did not incorporate scaling correctly.
The term is slowly gaining a broader usage in several fields of mathematics, including combinatorics and probability theory, whenever the quantitative features of a structure (such as asymptotic behaviour) can be deduced from a few global parameters appearing in the definition, without requiring knowledge of the details of the system.
The renormalization group provides an intuitively appealing, albeit mathematically non-rigorous, explanation of universality. It classifies operators in a statistical field theory into relevant and irrelevant. Relevant operators are those responsible for perturbations to the free energy, the imaginary time Lagrangian, that will affect the continuum limit, and can be seen at long distances. Irrelevant operators are those that only change the short-distance details. The collection of scale-invariant statistical theories define the universality classes, and the finite-dimensional list of coefficients of relevant operators parametrize the near-critical behavior.
== Universality in statistical mechanics ==
The notion of universality originated in the study of phase transitions in statistical mechanics. A phase transition occurs when a material changes its properties in a dramatic way: water, as it is heated, boils and turns into vapor; or a magnet, when heated, loses its magnetism. Phase transitions are characterized by an order parameter, such as the density or the magnetization, that changes as a function of a parameter of the system, such as the temperature. The special value of the parameter at which the system changes its phase is the system's critical point. For systems that exhibit universality, the closer the parameter is to its critical value, the less sensitively the order parameter depends on the details of the system.
If the parameter β is critical at the value βc, then the order parameter a will be well approximated by
{\displaystyle a=a_{0}\left\vert \beta -\beta _{c}\right\vert ^{\alpha }.}
The exponent α is a critical exponent of the system. The remarkable discovery made in the second half of the twentieth century was that very different systems had the same critical exponents.
In 1975, Mitchell Feigenbaum discovered universality in iterated maps.
== Examples ==
Universality gets its name because it is seen in a large variety of physical systems. Examples of universality include:
Avalanches in piles of sand. The likelihood of an avalanche is in power-law proportion to the size of the avalanche, and avalanches are seen to occur at all size scales. This is termed "self-organized criticality".
The formation and propagation of cracks and tears in materials ranging from steel to rock to paper. The variations of the direction of the tear, or the roughness of a fractured surface, are in power-law proportion to the size scale.
The electrical breakdown of dielectrics, which resemble cracks and tears.
The percolation of fluids through disordered media, such as petroleum through fractured rock beds, or water through filter paper, such as in chromatography. Power-law scaling connects the rate of flow to the distribution of fractures.
The diffusion of molecules in solution, and the phenomenon of diffusion-limited aggregation.
The distribution of rocks of different sizes in an aggregate mixture that is being shaken (with gravity acting on the rocks).
The appearance of critical opalescence in fluids near a phase transition.
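The percolation example can be made concrete with a short simulation (a sketch added here, not part of the original article): it estimates the probability that an open path crosses an n × n grid in site percolation. The crossing probability jumps sharply as the occupation probability p passes the critical value p ≈ 0.593, the kind of critical point near which universal behavior appears.

```python
import random

class DSU:
    """Union-find with path halving, used to track connected clusters."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.p[ra] = rb

def spans(n, p, rng):
    """Open each site of an n x n grid with probability p; return True if
    an open path connects the top row to the bottom row."""
    open_ = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    top, bot = n * n, n * n + 1          # two virtual nodes
    dsu = DSU(n * n + 2)
    for i in range(n):
        for j in range(n):
            if not open_[i][j]:
                continue
            k = i * n + j
            if i == 0:
                dsu.union(k, top)
            if i == n - 1:
                dsu.union(k, bot)
            if i + 1 < n and open_[i + 1][j]:
                dsu.union(k, (i + 1) * n + j)
            if j + 1 < n and open_[i][j + 1]:
                dsu.union(k, i * n + j + 1)
    return dsu.find(top) == dsu.find(bot)

rng = random.Random(0)
for p in (0.3, 0.593, 0.9):   # site-percolation threshold on the square lattice is ~0.5927
    hits = sum(spans(40, p, rng) for _ in range(40))
    print(p, hits / 40)       # crossing probability climbs from ~0 to ~1 through p ~ 0.593
```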
== Theoretical overview ==
One of the important developments in materials science in the 1970s and the 1980s was the realization that statistical field theory, similar to quantum field theory, could be used to provide a microscopic theory of universality. The core observation was that, for all of the different systems, the behaviour at a phase transition is described by a continuum field, and that the same statistical field theory will describe different systems. The scaling exponents in all of these systems can be derived from the field theory alone, and are known as critical exponents.
The key observation is that near a phase transition or critical point, disturbances occur at all size scales, and thus one should look for an explicitly scale-invariant theory to describe the phenomena, as seems to have been put in a formal theoretical framework first by Pokrovsky and Patashinsky in 1965. Universality is a by-product of the fact that there are relatively few scale-invariant theories. For any one specific physical system, the detailed description may have many scale-dependent parameters and aspects. However, as the phase transition is approached, the scale-dependent parameters play less and less of an important role, and the scale-invariant parts of the physical description dominate. Thus, a simplified, and often exactly solvable, model can be used to approximate the behaviour of these systems near the critical point.
Percolation may be modeled by a random electrical resistor network, with electricity flowing from one side of the network to the other. The overall resistance of the network is seen to be described by the average connectivity of the resistors in the network.
The formation of tears and cracks may be modeled by a random network of electrical fuses. As the electric current flow through the network is increased, some fuses may pop, but on the whole, the current is shunted around the problem areas, and uniformly distributed. However, at a certain point (at the phase transition) a cascade failure may occur, where the excess current from one popped fuse overloads the next fuse in turn, until the two sides of the net are completely disconnected and no more current flows.
To perform the analysis of such random-network systems, one considers the stochastic space of all possible networks (that is, the canonical ensemble), and performs a summation (integration) over all possible network configurations. As in the previous discussion, each given random configuration is understood to be drawn from the pool of all configurations with some given probability distribution; the role of temperature in the distribution is typically replaced by the average connectivity of the network.
The expectation values of operators, such as the rate of flow, the heat capacity, and so on, are obtained by integrating over all possible configurations. This act of integration over all possible configurations is the point of commonality between systems in statistical mechanics and quantum field theory. In particular, the language of the renormalization group may be applied to the discussion of the random network models. In the 1990s and 2000s, stronger connections between the statistical models and conformal field theory were uncovered. The study of universality remains a vital area of research.
== Applications to other fields ==
Like other concepts from statistical mechanics (such as entropy and master equations), universality has proven a useful construct for characterizing distributed systems at a higher level, such as multi-agent systems. The term has been applied to multi-agent simulations, where the system-level behavior exhibited by the system is independent of the degree of complexity of the individual agents, being driven almost entirely by the nature of the constraints governing their interactions. In network dynamics, universality refers to the fact that despite the diversity of nonlinear dynamic models, which differ in many details, the observed behavior of many different systems adheres to a set of universal laws. These laws are independent of the specific details of each system.
== References == | Wikipedia/Universality_(dynamical_systems) |
In the theory of dynamical systems, the shadowing lemma is a lemma describing the behaviour of pseudo-orbits near a hyperbolic invariant set. Informally, the lemma states that every pseudo-orbit (which one can think of as a numerically computed trajectory with rounding errors on every step) stays uniformly close to some true trajectory (with slightly altered initial position)—in other words, a pseudo-trajectory is "shadowed" by a true one. This suggests that numerical solutions can be trusted to represent the orbits of the dynamical system. However, caution should be exercised as some shadowing trajectories may not always be physically realizable.
== Formal statement ==
Given a map f : X → X of a metric space (X, d) to itself, define an ε-pseudo-orbit (or ε-orbit) as a sequence {\displaystyle (x_{n})} of points such that {\displaystyle x_{n+1}} belongs to an ε-neighborhood of {\displaystyle f(x_{n})}.
Then, near a hyperbolic invariant set, the following statement holds:
Let Λ be a hyperbolic invariant set of a diffeomorphism f. There exists a neighborhood U of Λ with the following property: for any δ > 0 there exists ε > 0, such that any (finite or infinite) ε-pseudo-orbit that stays in U also stays in a δ-neighborhood of some true orbit.
{\displaystyle \forall (x_{n}),\,x_{n}\in U,\,d(x_{n+1},f(x_{n}))<\varepsilon \quad \exists (y_{n}),\,\,y_{n+1}=f(y_{n}),\quad {\text{such that}}\,\,\forall n\,\,x_{n}\in U_{\delta }(y_{n}).}
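The lemma can be illustrated numerically. The sketch below uses the doubling map x ↦ 2x mod 1 — an expanding circle map rather than the hyperbolic diffeomorphism of the formal statement, but one that also has the shadowing property — and recovers a shadowing true orbit from a noisy pseudo-orbit by backward iteration, always picking the inverse branch nearest the pseudo-point:

```python
import random

def f(x):
    """Doubling map on the circle [0, 1)."""
    return (2.0 * x) % 1.0

def circle_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

# Build an eps-pseudo-orbit: apply f, then perturb each step.
rng = random.Random(42)
eps = 1e-6
N = 50
x = [rng.random()]
for _ in range(N):
    x.append((f(x[-1]) + rng.uniform(-eps, eps)) % 1.0)

# Recover a shadowing true orbit by backward iteration: start from the last
# pseudo-point and at each step choose the inverse branch (y/2 or (y+1)/2)
# that lands nearest the corresponding pseudo-point.  Each backward step
# halves the error, so the shadowing distance stays below eps.
y = [0.0] * (N + 1)
y[N] = x[N]
for n in range(N - 1, -1, -1):
    candidates = (y[n + 1] / 2.0, (y[n + 1] + 1.0) / 2.0)
    y[n] = min(candidates, key=lambda c: circle_dist(c, x[n]))

max_orbit_err = max(circle_dist(y[n + 1], f(y[n])) for n in range(N))   # y is a true orbit
max_shadow_dist = max(circle_dist(y[n], x[n]) for n in range(N + 1))    # y shadows x
print(max_orbit_err, max_shadow_dist)
```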
== See also ==
Chaotic systems
Butterfly effect
== References ==
== External links ==
Shadowing Theorem on Scholarpedia
Can a butterfly in Brazil control the climate of Texas? | Wikipedia/Shadowing_lemma |
Stein's lemma, named in honor of Charles Stein, is a theorem of probability theory that is of interest primarily because of its applications to statistical inference — in particular, to James–Stein estimation and empirical Bayes methods — and its applications to portfolio choice theory. The theorem gives a formula for the covariance of one random variable with the value of a function of another, when the two random variables are jointly normally distributed.
Note that the name "Stein's lemma" is also commonly used to refer to a different result in the area of statistical hypothesis testing, which connects the error exponents in hypothesis testing with the Kullback–Leibler divergence. This result is also known as the Chernoff–Stein lemma and is not related to the lemma discussed in this article.
== Statement ==
Suppose X is a normally distributed random variable with expectation μ and variance σ2.
Further suppose g is a differentiable function for which the two expectations {\displaystyle \operatorname {E} (g(X)(X-\mu ))} and {\displaystyle \operatorname {E} (g'(X))} both exist.
(The existence of the expectation of any random variable is equivalent to the finiteness of the expectation of its absolute value.)
Then
{\displaystyle \operatorname {E} {\bigl (}g(X)(X-\mu ){\bigr )}=\sigma ^{2}\operatorname {E} {\bigl (}g'(X){\bigr )}.}
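The identity can be checked by Monte Carlo simulation; the sketch below (illustrative, with arbitrarily chosen g, μ and σ) compares both sides of the lemma:

```python
import random

def stein_check(mu, sigma, g, gprime, n=200_000, seed=0):
    """Monte Carlo check of Stein's lemma: for X ~ N(mu, sigma^2),
    E[g(X)(X - mu)] should equal sigma^2 * E[g'(X)]."""
    rng = random.Random(seed)
    lhs = rhs = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        lhs += g(x) * (x - mu)
        rhs += gprime(x)
    return lhs / n, sigma ** 2 * rhs / n

lhs, rhs = stein_check(mu=1.0, sigma=2.0, g=lambda x: x * x, gprime=lambda x: 2 * x)
print(lhs, rhs)   # both near 8 = sigma^2 * E[2X] = 4 * 2
```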
=== Multidimensional ===
In general, suppose X and Y are jointly normally distributed. Then
{\displaystyle \operatorname {Cov} (g(X),Y)=\operatorname {Cov} (X,Y)\operatorname {E} (g'(X)).}
For a general multivariate Gaussian random vector {\displaystyle (X_{1},...,X_{n})\sim {\mathcal {N}}(\mu ,\Sigma )}
it follows that
{\displaystyle \operatorname {E} {\bigl (}g(X)(X-\mu ){\bigr )}=\Sigma \cdot E{\bigl (}\nabla g(X){\bigr )}.}
Similarly, when {\displaystyle \mu =0},
{\displaystyle \operatorname {E} [\partial _{i}g(X)]=\operatorname {E} [g(X)(\Sigma ^{-1}X)_{i}],\quad \operatorname {E} [\partial _{i}\partial _{j}g(X)]=\operatorname {E} [g(X)((\Sigma ^{-1}X)_{i}(\Sigma ^{-1}X)_{j}-\Sigma _{ij}^{-1})]}
=== Gradient descent ===
Stein's lemma can be used to estimate a gradient stochastically:
{\displaystyle \nabla \operatorname {E} _{\epsilon \sim {\mathcal {N}}(0,I)}{\bigl (}g(x+\Sigma ^{1/2}\epsilon ){\bigr )}=\Sigma ^{-1/2}\operatorname {E} _{\epsilon \sim {\mathcal {N}}(0,I)}{\bigl (}g(x+\Sigma ^{1/2}\epsilon )\epsilon {\bigr )}\approx \Sigma ^{-1/2}{\frac {1}{N}}\sum _{i=1}^{N}g(x+\Sigma ^{1/2}\epsilon _{i})\epsilon _{i}}
where {\displaystyle \epsilon _{1},\dots ,\epsilon _{N}} are IID samples from the standard normal distribution {\displaystyle {\mathcal {N}}(0,I)}. This form has applications in Stein variational gradient descent and Stein variational policy gradient.
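As a sketch of this estimator in one dimension (so Σ reduces to a scalar σ²; the function g and the point x are arbitrary choices for illustration):

```python
import random

def smoothed_grad(g, x, sigma, n=200_000, seed=0):
    """Stochastic estimate of d/dx E[g(x + sigma*eps)], eps ~ N(0,1),
    via Stein's lemma: the gradient equals E[g(x + sigma*eps) * eps] / sigma."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        e = rng.gauss(0.0, 1.0)
        acc += g(x + sigma * e) * e
    return acc / (n * sigma)

# For g(x) = x^2, E[g(x + sigma*eps)] = x^2 + sigma^2, so the true gradient is 2x.
est = smoothed_grad(lambda t: t * t, x=1.5, sigma=0.3, seed=1)
print(est)   # close to 3.0
```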
== Proof ==
The probability density function for the univariate normal distribution with expectation 0 and variance 1 is
{\displaystyle \varphi (x)={1 \over {\sqrt {2\pi }}}e^{-x^{2}/2}}
Since
{\displaystyle \int x\exp(-x^{2}/2)\,dx=-\exp(-x^{2}/2)}
we get from integration by parts:
{\displaystyle \operatorname {E} [g(X)X]={\frac {1}{\sqrt {2\pi }}}\int g(x)x\exp(-x^{2}/2)\,dx={\frac {1}{\sqrt {2\pi }}}\int g'(x)\exp(-x^{2}/2)\,dx=\operatorname {E} [g'(X)]}
The case of general variance {\displaystyle \sigma ^{2}} follows by substitution.
== Generalizations ==
Isserlis' theorem is equivalently stated as
{\displaystyle \operatorname {E} (X_{1}f(X_{1},\ldots ,X_{n}))=\sum _{i=1}^{n}\operatorname {Cov} (X_{1},X_{i})\operatorname {E} (\partial _{X_{i}}f(X_{1},\ldots ,X_{n})).}
where {\displaystyle (X_{1},\dots X_{n})} is a zero-mean multivariate normal random vector.
Suppose X is in an exponential family, that is, X has the density
{\displaystyle f_{\eta }(x)=\exp(\eta 'T(x)-\Psi (\eta ))h(x).}
Suppose this density has support {\displaystyle (a,b)} where {\displaystyle a,b} could be {\displaystyle -\infty ,\infty } and, as {\displaystyle x\rightarrow a{\text{ or }}b}, {\displaystyle \exp(\eta 'T(x))h(x)g(x)\rightarrow 0} where {\displaystyle g} is any differentiable function such that {\displaystyle E|g'(X)|<\infty }, or {\displaystyle \exp(\eta 'T(x))h(x)\rightarrow 0} if {\displaystyle a,b} are finite. Then
{\displaystyle E\left[\left({\frac {h'(X)}{h(X)}}+\sum \eta _{i}T_{i}'(X)\right)\cdot g(X)\right]=-E[g'(X)].}
The derivation is the same as in the special case, namely, integration by parts.
If we only know {\displaystyle X} has support {\displaystyle \mathbb {R} }, then it could be the case that {\displaystyle E|g(X)|<\infty {\text{ and }}E|g'(X)|<\infty } but {\displaystyle \lim _{x\rightarrow \infty }f_{\eta }(x)g(x)\not =0}. To see this, simply put {\displaystyle g(x)=1} and take {\displaystyle f_{\eta }(x)} with infinitely many spikes towards infinity that are still integrable. One such example could be adapted from
{\displaystyle f(x)={\begin{cases}1&x\in [n,n+2^{-n})\\0&{\text{otherwise}}\end{cases}}}
so that {\displaystyle f} is smooth.
Extensions to elliptically-contoured distributions also exist.
== See also ==
Stein's method
Taylor expansions for the moments of functions of random variables
Stein discrepancy
== References == | Wikipedia/Stein's_lemma |
A random r-regular graph is a graph selected from {\displaystyle {\mathcal {G}}_{n,r}}, which denotes the probability space of all r-regular graphs on {\displaystyle n} vertices, where {\displaystyle 3\leq r<n} and {\displaystyle nr} is even. It is therefore a particular kind of random graph, but the regularity restriction significantly alters the properties that will hold, since most graphs are not regular.
== Properties of random regular graphs ==
As with more general random graphs, it is possible to prove that certain properties of random {\displaystyle r}-regular graphs hold asymptotically almost surely. In particular, for {\displaystyle r\geq 3}, a random r-regular graph of large size is asymptotically almost surely r-connected. In other words, although {\displaystyle r}-regular graphs with connectivity less than {\displaystyle r} exist, the probability of selecting such a graph tends to 0 as {\displaystyle n} increases.
If {\displaystyle \epsilon >0} is a positive constant, and {\displaystyle d} is the least integer satisfying {\displaystyle (r-1)^{d-1}\geq (2+\epsilon )rn\ln n} then, asymptotically almost surely, a random r-regular graph has diameter at most d. There is also a (more complex) lower bound on the diameter of r-regular graphs, so that almost all r-regular graphs (of the same size) have almost the same diameter.
The distribution of the number of short cycles is also known: for fixed {\displaystyle m\geq 3}, let {\displaystyle Y_{3},Y_{4},...Y_{m}} be the number of cycles of lengths up to {\displaystyle m}. Then the {\displaystyle Y_{i}} are asymptotically independent Poisson random variables with means {\displaystyle \lambda _{i}={\frac {(r-1)^{i}}{2i}}}.
== Algorithms for random regular graphs ==
It is non-trivial to implement the random selection of r-regular graphs efficiently and in an unbiased way, since most graphs are not regular. The pairing model (also configuration model) is a method which takes nr points, and partitions them into n buckets with r points in each of them. Taking a random matching of the nr points, and then contracting the r points in each bucket into a single vertex, yields an r-regular graph or multigraph. If this object has no multiple edges or loops (i.e. it is a graph), then it is the required result. If not, a restart is required.
A refinement of this method was developed by Brendan McKay and Nicholas Wormald.
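The pairing model described above can be sketched as follows (a minimal implementation with the naive restart strategy, not the McKay–Wormald refinement):

```python
import random

def random_regular_graph(n, r, seed=None):
    """Pairing (configuration) model: n*r points in n buckets of r points;
    take a uniform perfect matching of the points, contract buckets to
    vertices, and restart on any loop or multiple edge.  Requires n*r even."""
    assert n * r % 2 == 0 and 0 <= r < n
    rng = random.Random(seed)
    while True:                       # restart until a simple graph appears
        points = [v for v in range(n) for _ in range(r)]
        rng.shuffle(points)           # a uniformly random perfect matching
        edges = set()
        ok = True
        for i in range(0, n * r, 2):
            a, b = points[i], points[i + 1]
            if a == b or (min(a, b), max(a, b)) in edges:
                ok = False            # loop or repeated edge: reject
                break
            edges.add((min(a, b), max(a, b)))
        if ok:
            return edges

g = random_regular_graph(10, 3, seed=0)
deg = {v: 0 for v in range(10)}
for a, b in g:
    deg[a] += 1
    deg[b] += 1
print(sorted(deg.values()))   # every vertex has degree 3
```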
== References == | Wikipedia/Random_regular_graph |
The Hann function is named after the Austrian meteorologist Julius von Hann. It is a window function used to perform Hann smoothing or hanning. The function, with length {\displaystyle L} and amplitude {\displaystyle 1/L,} is given by:
{\displaystyle w_{0}(x)\triangleq \left\{{\begin{array}{ccl}{\tfrac {1}{L}}\left({\tfrac {1}{2}}+{\tfrac {1}{2}}\cos \left({\frac {2\pi x}{L}}\right)\right)={\tfrac {1}{L}}\cos ^{2}\left({\frac {\pi x}{L}}\right),\quad &\left|x\right|\leq L/2\\0,\quad &\left|x\right|>L/2\end{array}}\right\}.}
For digital signal processing, the function is sampled symmetrically (with spacing {\displaystyle L/N} and amplitude {\displaystyle 1}):
{\displaystyle \left.{\begin{aligned}w[n]=L\cdot w_{0}\left({\tfrac {L}{N}}(n-N/2)\right)&={\tfrac {1}{2}}\left[1-\cos \left({\tfrac {2\pi n}{N}}\right)\right]\\&=\sin ^{2}\left({\tfrac {\pi n}{N}}\right)\end{aligned}}\right\},\quad 0\leq n\leq N,}
which is a sequence of {\displaystyle N+1} samples, and {\displaystyle N} can be even or odd. It is also known as the raised cosine window, Hann filter, von Hann window, Hanning window, etc.
== Fourier transform ==
The Fourier transform of {\displaystyle w_{0}(x)} is given by:
{\displaystyle W_{0}(f)={\frac {1}{2}}{\frac {\operatorname {sinc} (Lf)}{(1-L^{2}f^{2})}}={\frac {\sin(\pi Lf)}{2\pi Lf(1-L^{2}f^{2})}}}
== Discrete transforms ==
The discrete-time Fourier transform (DTFT) of the {\displaystyle N+1}-length, time-shifted sequence is defined by a Fourier series, which also has a 3-term equivalent that is derived similarly to the Fourier transform derivation:
{\displaystyle {\begin{aligned}{\mathcal {F}}\{w[n]\}&\triangleq \sum _{n=0}^{N}w[n]\cdot e^{-i2\pi fn}\\&=e^{-i\pi fN}\left[{\tfrac {1}{2}}{\frac {\sin(\pi (N+1)f)}{\sin(\pi f)}}+{\tfrac {1}{4}}{\frac {\sin(\pi (N+1)(f-{\tfrac {1}{N}}))}{\sin(\pi (f-{\tfrac {1}{N}}))}}+{\tfrac {1}{4}}{\frac {\sin(\pi (N+1)(f+{\tfrac {1}{N}}))}{\sin(\pi (f+{\tfrac {1}{N}}))}}\right].\end{aligned}}}
The truncated sequence {\displaystyle \{w[n],\ 0\leq n\leq N-1\}} is a DFT-even (aka periodic) Hann window. Since the truncated sample has value zero, it is clear from the Fourier series definition that the DTFTs are equivalent. However, the approach followed above results in a significantly different-looking, but equivalent, 3-term expression:
{\displaystyle {\mathcal {F}}\{w[n]\}=e^{-i\pi f(N-1)}\left[{\tfrac {1}{2}}{\frac {\sin(\pi Nf)}{\sin(\pi f)}}+{\tfrac {1}{4}}e^{-i\pi /N}{\frac {\sin(\pi N(f-{\tfrac {1}{N}}))}{\sin(\pi (f-{\tfrac {1}{N}}))}}+{\tfrac {1}{4}}e^{i\pi /N}{\frac {\sin(\pi N(f+{\tfrac {1}{N}}))}{\sin(\pi (f+{\tfrac {1}{N}}))}}\right].}
An N-length DFT of the window function samples the DTFT at frequencies {\displaystyle f=k/N,} for integer values of {\displaystyle k.}
From the expression immediately above, it is easy to see that only 3 of the N DFT coefficients are non-zero. And from the other expression, it is apparent that all are real-valued. These properties are appealing for real-time applications that require both windowed and non-windowed (rectangularly windowed) transforms, because the windowed transforms can be efficiently derived from the non-windowed transforms by convolution.
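The two properties stated above — only three non-zero DFT bins, all real-valued — are easy to verify numerically for the periodic window (a sketch with N = 16 and a naive DFT):

```python
import cmath
import math

N = 16
# Periodic (DFT-even) Hann window: w[n] = sin^2(pi n / N), n = 0 .. N-1
w = [0.5 * (1 - math.cos(2 * math.pi * n / N)) for n in range(N)]

# Naive DFT of the window
W = [sum(w[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

nonzero = [k for k in range(N) if abs(W[k]) > 1e-9]
print(nonzero)                                   # only bins 0, 1 and N-1 survive
print([round(W[k].real, 6) for k in nonzero])    # N/2 and -N/4 twice, all real
```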
== Name ==
The function is named in honor of von Hann, who used the three-term weighted average smoothing technique on meteorological data. However, the term Hanning function is also conventionally used, derived from the paper in which the term hanning a signal was used to mean applying the Hann window to it. It is distinct from the similarly-named Hamming function, named after Richard Hamming.
== See also ==
Window function
Apodization
Raised cosine distribution
Raised-cosine filter
== Page citations ==
== References ==
== External links ==
Hann function at MathWorld | Wikipedia/Hann_function |
In statistics, the Khmaladze transformation is a mathematical tool used in constructing convenient goodness of fit tests for hypothetical distribution functions. More precisely, suppose {\displaystyle X_{1},\ldots ,X_{n}} are i.i.d., possibly multi-dimensional, random observations generated from an unknown probability distribution. A classical problem in statistics is to decide how well a given hypothetical distribution function {\displaystyle F}, or a given hypothetical parametric family of distribution functions {\displaystyle \{F_{\theta }:\theta \in \Theta \}}, fits the set of observations. The Khmaladze transformation allows us to construct goodness of fit tests with desirable properties. It is named after Estate V. Khmaladze.
Consider the sequence of empirical distribution functions {\displaystyle F_{n}} based on a sequence of i.i.d random variables, {\displaystyle X_{1},\ldots ,X_{n}}, as n increases. Suppose {\displaystyle F} is the hypothetical distribution function of each {\displaystyle X_{i}}. To test whether the choice of {\displaystyle F} is correct or not, statisticians use the normalized difference,
{\displaystyle v_{n}(x)={\sqrt {n}}[F_{n}(x)-F(x)].}
This {\displaystyle v_{n}}, as a random process in {\displaystyle x}, is called the empirical process. Various functionals of {\displaystyle v_{n}} are used as test statistics. The change of the variable {\displaystyle v_{n}(x)=u_{n}(t)}, {\displaystyle t=F(x)} transforms to the so-called uniform empirical process {\displaystyle u_{n}}. The latter is an empirical process based on independent random variables {\displaystyle U_{i}=F(X_{i})}, which are uniformly distributed on {\displaystyle [0,1]} if the {\displaystyle X_{i}}s do indeed have distribution function {\displaystyle F}.
This fact was discovered and first utilized by Kolmogorov (1933), Wald and Wolfowitz (1936) and Smirnov (1937) and, especially after Doob (1949) and Anderson and Darling (1952), it led to the standard rule to choose test statistics based on {\displaystyle v_{n}}. That is, test statistics {\displaystyle \psi (v_{n},F)} are defined (which possibly depend on the {\displaystyle F} being tested) in such a way that there exists another statistic {\displaystyle \varphi (u_{n})} derived from the uniform empirical process, such that {\displaystyle \psi (v_{n},F)=\varphi (u_{n})}. Examples are
{\displaystyle \sup _{x}|v_{n}(x)|=\sup _{t}|u_{n}(t)|,\quad \sup _{x}{\frac {|v_{n}(x)|}{a(F(x))}}=\sup _{t}{\frac {|u_{n}(t)|}{a(t)}}}
and
{\displaystyle \int _{-\infty }^{\infty }v_{n}^{2}(x)\,dF(x)=\int _{0}^{1}u_{n}^{2}(t)\,dt.}
For all such functionals, their null distribution (under the hypothetical {\displaystyle F}) does not depend on {\displaystyle F}, and can be calculated once and then used to test any {\displaystyle F}.
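The distribution-free identity sup|vₙ| = sup|uₙ| can also be verified directly (a sketch assuming Exp(1) data; the two computations agree up to rounding):

```python
import math
import random

def ks_stat(sample, F):
    """sup_x |sqrt(n) * (F_n(x) - F(x))| for the empirical CDF of the sample;
    the supremum is attained at the sample points."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        u = F(x)
        d = max(d, abs((i + 1) / n - u), abs(i / n - u))
    return math.sqrt(n) * d

rng = random.Random(0)
expF = lambda x: 1.0 - math.exp(-x)          # Exp(1) distribution function
sample = [rng.expovariate(1.0) for _ in range(100)]

direct = ks_stat(sample, expF)
# Change of variable t = F(x): the transformed sample U_i = F(X_i) is uniform,
# and the statistic computed against the uniform CDF is the same number.
uniformized = ks_stat([expF(x) for x in sample], lambda t: t)
print(direct, uniformized)
```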
However, it is only rarely that one needs to test a simple hypothesis, when a fixed {\displaystyle F} as a hypothesis is given. Much more often, one needs to verify parametric hypotheses where the hypothetical {\displaystyle F=F_{\theta _{n}}} depends on some parameters {\displaystyle \theta _{n}}, which the hypothesis does not specify and which have to be estimated from the sample {\displaystyle X_{1},\ldots ,X_{n}} itself.
Although the estimators {\displaystyle {\hat {\theta }}_{n}} most commonly converge to the true value of {\displaystyle \theta }, it was discovered that the parametric, or estimated, empirical process {\displaystyle {\hat {v}}_{n}(x)={\sqrt {n}}[F_{n}(x)-F_{{\hat {\theta }}_{n}}(x)]} differs significantly from {\displaystyle v_{n}}, and that the transformed process {\displaystyle {\hat {u}}_{n}(t)={\hat {v}}_{n}(x)}, {\displaystyle t=F_{{\hat {\theta }}_{n}}(x)} has a limit distribution, as {\displaystyle n\to \infty }, that depends on the parametric form of {\displaystyle F_{\theta }}, on the particular estimator {\displaystyle {\hat {\theta }}_{n}}, and, in general, within one parametric family, on the value of {\displaystyle \theta }.
From the mid-1950s to the late 1980s, much work was done to clarify the situation and understand the nature of the process {\displaystyle {\hat {v}}_{n}}.
In 1981, and then in 1987 and 1993, Khmaladze suggested replacing the parametric empirical process {\displaystyle {\hat {v}}_{n}} by its martingale part {\displaystyle w_{n}} only.
{\displaystyle {\hat {v}}_{n}(x)-K_{n}(x)=w_{n}(x)}
where {\displaystyle K_{n}(x)} is the compensator of {\displaystyle {\hat {v}}_{n}(x)}. Then the following properties of {\displaystyle w_{n}} were established:
Although the form of {\displaystyle K_{n}}, and therefore of {\displaystyle w_{n}}, depends on {\displaystyle F_{{\hat {\theta }}_{n}}(x)}, as a function of both {\displaystyle x} and {\displaystyle \theta _{n}}, the limit distribution of the time-transformed process {\displaystyle \omega _{n}(t)=w_{n}(x),t=F_{{\hat {\theta }}_{n}}(x)} is that of standard Brownian motion on {\displaystyle [0,1]}, i.e., is again standard and independent of the choice of {\displaystyle F_{{\hat {\theta }}_{n}}}.
The relationship between {\displaystyle {\hat {v}}_{n}} and {\displaystyle w_{n}}, and between their limits, is one to one, so that statistical inference based on {\displaystyle {\hat {v}}_{n}} or on {\displaystyle w_{n}} is equivalent, and in {\displaystyle w_{n}} nothing is lost compared to {\displaystyle {\hat {v}}_{n}}.
The construction of the innovation martingale {\displaystyle w_{n}} could be carried over to the case of vector-valued {\displaystyle X_{1},\ldots ,X_{n}}, giving rise to the definition of the so-called scanning martingales in {\displaystyle \mathbb {R} ^{d}}.
For a long time the transformation, although known, remained unused. Later, the work of researchers like Koenker, Stute, Bai, Koul, Koening, and others made it popular in econometrics and other fields of statistics.
== See also ==
Empirical process
== References ==
== Further reading ==
Koul, H. L.; Swordson, E. (2011). "Khmaladze transformation". International Encyclopedia of Statistical Science. Springer. pp. 715–718. doi:10.1007/978-3-642-04898-2_325. ISBN 978-3-642-04897-5. | Wikipedia/Khmaladze_transformation |
In combinatorial mathematics and probability theory, the Schrödinger method, named after the Austrian physicist Erwin Schrödinger, is used to solve some problems of distribution and occupancy.
Suppose {\displaystyle X_{1},\dots ,X_{n}\,} are independent random variables that are uniformly distributed on the interval [0, 1]. Let {\displaystyle X_{(1)},\dots ,X_{(n)}\,} be the corresponding order statistics, i.e., the result of sorting these n random variables into increasing order. We seek the probability of some event A defined in terms of these order statistics. For example, we might seek the probability that in a certain seven-day period there were at most two days on which only one phone call was received, given that the number of phone calls during that time was 20. This assumes uniform distribution of arrival times.
The Schrödinger method begins by assigning a Poisson distribution with expected value λt to the number of observations in the interval [0, t], the number of observations in non-overlapping subintervals being independent (see Poisson process). The number N of observations is Poisson-distributed with expected value λ. Then we rely on the fact that the conditional probability
{\displaystyle P(A\mid N=n)\,}
does not depend on λ (in the language of statisticians, N is a sufficient statistic for this parametrized family of probability distributions for the order statistics). We proceed as follows:
{\displaystyle P_{\lambda }(A)=\sum _{n=0}^{\infty }P(A\mid N=n)P(N=n)=\sum _{n=0}^{\infty }P(A\mid N=n){\lambda ^{n}e^{-\lambda } \over n!},}
so that
{\displaystyle e^{\lambda }\,P_{\lambda }(A)=\sum _{n=0}^{\infty }P(A\mid N=n){\lambda ^{n} \over n!}.}
Now the lack of dependence of P(A | N = n) upon λ entails that the last sum displayed above is a power series in λ and P(A | N = n) is the value of its nth derivative at λ = 0, i.e.,
{\displaystyle P(A\mid N=n)=\left[{d^{n} \over d\lambda ^{n}}\left(e^{\lambda }\,P_{\lambda }(A)\right)\right]_{\lambda =0}.}
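The λ-independence of P(A | N = n), on which the method relies, can be checked numerically on the phone-call example (a sketch; the Poisson sampler and run counts are arbitrary choices):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplicative method (adequate for small lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def single_call_days(counts):
    return sum(1 for c in counts if c == 1)

def direct(n_runs, rng):
    """P(at most 2 single-call days | N = 20): 20 calls at iid uniform
    times in a 7-day week."""
    hit = 0
    for _ in range(n_runs):
        counts = [0] * 7
        for _ in range(20):
            counts[int(rng.uniform(0, 7))] += 1
        hit += single_call_days(counts) <= 2
    return hit / n_runs

def poissonized(lam_per_day, n_runs, rng):
    """Same probability via independent Poisson day-counts, conditioned on a
    total of 20 calls; the answer should not depend on lam_per_day."""
    hit = kept = 0
    for _ in range(n_runs):
        counts = [poisson(lam_per_day, rng) for _ in range(7)]
        if sum(counts) == 20:
            kept += 1
            hit += single_call_days(counts) <= 2
    return hit / kept

rng = random.Random(0)
print(direct(10_000, rng), poissonized(2.0, 30_000, rng), poissonized(20 / 7, 30_000, rng))
```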
For this method to be of any use in finding P(A | N = n), it must be possible to find Pλ(A) more directly than P(A | N = n). What makes that possible is the independence of the numbers of arrivals in non-overlapping subintervals. | Wikipedia/Schrödinger_method
In statistics, the Anscombe transform, named after Francis Anscombe, is a variance-stabilizing transformation that transforms a random variable with a Poisson distribution into one with an approximately standard Gaussian distribution. The Anscombe transform is widely used in photon-limited imaging (astronomy, X-ray) where images naturally follow the Poisson law. The Anscombe transform is usually used to pre-process the data in order to make the standard deviation approximately constant. Then denoising algorithms designed for the framework of additive white Gaussian noise are used; the final estimate is then obtained by applying an inverse Anscombe transformation to the denoised data.
== Definition ==
For the Poisson distribution the mean {\displaystyle m} and variance {\displaystyle v} are not independent: {\displaystyle m=v}. The Anscombe transform {\displaystyle A:x\mapsto 2{\sqrt {x+{\tfrac {3}{8}}}}\,} aims at transforming the data so that the variance is set approximately 1 for large enough mean; for mean zero, the variance is still zero.
It transforms Poissonian data {\displaystyle x} (with mean {\displaystyle m}) to approximately Gaussian data of mean {\displaystyle 2{\sqrt {m+{\tfrac {3}{8}}}}-{\tfrac {1}{4\,m^{1/2}}}+O\left({\tfrac {1}{m^{3/2}}}\right)} and standard deviation {\displaystyle 1+O\left({\tfrac {1}{m^{2}}}\right)}.
This approximation becomes more accurate for larger {\displaystyle m}, as can also be seen in the figure.
For a transformed variable of the form {\displaystyle 2{\sqrt {x+c}}}, the expression for the variance has an additional term {\displaystyle {\frac {{\tfrac {3}{8}}-c}{m}}}; it is reduced to zero at {\displaystyle c={\tfrac {3}{8}}}, which is exactly why this value was picked.
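A quick Monte Carlo check of this stabilization (a sketch with arbitrary means and sample sizes, not from the article): the raw Poisson data has standard deviation sqrt(m), while the transformed data stays close to 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def anscombe(x):
    """Anscombe transform: 2*sqrt(x + 3/8)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# For a range of Poisson means, the transformed data should have a
# standard deviation close to 1, independent of m.
for m in [5, 20, 100]:
    x = rng.poisson(m, size=200_000)
    s = anscombe(x).std()
    assert abs(s - 1.0) < 0.05, (m, s)
```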
== Inversion ==
When the Anscombe transform is used in denoising (i.e. when the goal is to obtain from {\displaystyle x} an estimate of {\displaystyle m}), its inverse transform is also needed in order to return the variance-stabilized and denoised data {\displaystyle y} to the original range.
Applying the algebraic inverse {\displaystyle A^{-1}:y\mapsto \left({\frac {y}{2}}\right)^{2}-{\frac {3}{8}}} usually introduces undesired bias to the estimate of the mean {\displaystyle m}, because the forward square-root transform is not linear. Sometimes using the asymptotically unbiased inverse {\displaystyle y\mapsto \left({\frac {y}{2}}\right)^{2}-{\frac {1}{8}}} mitigates the issue of bias, but this is not the case in photon-limited imaging, for which the exact unbiased inverse given by the implicit mapping {\displaystyle \operatorname {E} \left[2{\sqrt {x+{\tfrac {3}{8}}}}\mid m\right]=2\sum _{x=0}^{+\infty }\left({\sqrt {x+{\tfrac {3}{8}}}}\cdot {\frac {m^{x}e^{-m}}{x!}}\right)\mapsto m} should be used. A closed-form approximation of this exact unbiased inverse is
{\displaystyle y\mapsto {\frac {1}{4}}y^{2}-{\frac {1}{8}}+{\frac {1}{4}}{\sqrt {\frac {3}{2}}}y^{-1}-{\frac {11}{8}}y^{-2}+{\frac {5}{8}}{\sqrt {\frac {3}{2}}}y^{-3}.}
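The difference between the algebraic inverse and the closed-form approximation of the exact unbiased inverse can be sketched numerically. Here an "ideal denoiser" is simulated by averaging many transformed samples, so the inverses are applied to a good estimate of E[A(x) | m] (the mean m = 10 and sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_algebraic(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0

def inverse_unbiased(y):
    # Closed-form approximation of the exact unbiased inverse.
    return (0.25 * y**2 - 0.125
            + 0.25 * np.sqrt(1.5) / y
            - 11.0 / 8.0 / y**2
            + 5.0 / 8.0 * np.sqrt(1.5) / y**3)

m = 10.0
x = rng.poisson(m, size=500_000)
y = anscombe(x).mean()   # stand-in for a perfectly denoised value E[A(x) | m]

# The algebraic inverse is biased low; the unbiased inverse recovers m.
assert abs(inverse_unbiased(y) - m) < 0.1
assert inverse_algebraic(y) < m - 0.1
```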
== Alternatives ==
There are many other possible variance-stabilizing transformations for the Poisson distribution. Bar-Lev and Enis report a family of such transformations which includes the Anscombe transform. Another member of the family is the Freeman-Tukey transformation
{\displaystyle A:x\mapsto {\sqrt {x+1}}+{\sqrt {x}}.}
A simplified transformation, obtained as the primitive of the reciprocal of the standard deviation of the data, is
{\displaystyle A:x\mapsto 2{\sqrt {x}}}
which, while it is not quite so good at stabilizing the variance, has the advantage of being more easily understood.
Indeed, from the delta method,
{\displaystyle V[2{\sqrt {x}}]\approx \left({\frac {d(2{\sqrt {m}})}{dm}}\right)^{2}V[x]=\left({\frac {1}{\sqrt {m}}}\right)^{2}m=1.}
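A simulation at a low mean (m = 3, an arbitrary choice) illustrates the practical difference between the simplified transform and the Anscombe transform: both variances are near 1, but the Anscombe variant is noticeably closer when the mean is small.

```python
import numpy as np

rng = np.random.default_rng(2)

m = 3.0
x = rng.poisson(m, size=400_000)

var_simple = (2.0 * np.sqrt(x)).var()            # simplified transform 2*sqrt(x)
var_anscombe = (2.0 * np.sqrt(x + 3.0 / 8.0)).var()  # Anscombe transform

# Both are close to 1, but the Anscombe transform stabilizes better
# at low means (the simplified transform overshoots).
assert abs(var_anscombe - 1.0) < 0.1
assert abs(var_anscombe - 1.0) < abs(var_simple - 1.0)
```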
== Generalization ==
While the Anscombe transform is appropriate for pure Poisson data, in many applications the data presents also an additive Gaussian component. These cases are treated by a Generalized Anscombe transform and its asymptotically unbiased or exact unbiased inverses.
== See also ==
Variance-stabilizing transformation
Box–Cox transformation
== References ==
== Further reading ==
Starck, J.-L.; Murtagh, F. (2001), "Astronomical image and signal processing: looking at noise, information and scale", Signal Processing Magazine, IEEE, vol. 18, no. 2, pp. 30–40, Bibcode:2001ISPM...18...30S, doi:10.1109/79.916319, S2CID 13210703
In probability theory and statistics, the factorial moment generating function (FMGF) of the probability distribution of a real-valued random variable X is defined as
{\displaystyle M_{X}(t)=\operatorname {E} {\bigl [}t^{X}{\bigr ]}}
for all complex numbers t for which this expected value exists. This is the case at least for all t on the unit circle {\displaystyle |t|=1}; see characteristic function. If X is a discrete random variable taking values only in the set {0,1, ...} of non-negative integers, then {\displaystyle M_{X}} is also called the probability-generating function (PGF) of X and {\displaystyle M_{X}(t)} is well-defined at least for all t on the closed unit disk {\displaystyle |t|\leq 1}.
The factorial moment generating function generates the factorial moments of the probability distribution.
Provided {\displaystyle M_{X}} exists in a neighbourhood of t = 1, the nth factorial moment is given by
{\displaystyle \operatorname {E} [(X)_{n}]=M_{X}^{(n)}(1)=\left.{\frac {\mathrm {d} ^{n}}{\mathrm {d} t^{n}}}\right|_{t=1}M_{X}(t),}
where the Pochhammer symbol (x)n is the falling factorial
{\displaystyle (x)_{n}=x(x-1)(x-2)\cdots (x-n+1).}
(Many mathematicians, especially in the field of special functions, use the same notation to represent the rising factorial.)
== Examples ==
=== Poisson distribution ===
Suppose X has a Poisson distribution with expected value λ; then its factorial moment generating function is
{\displaystyle M_{X}(t)=\sum _{k=0}^{\infty }t^{k}\underbrace {\operatorname {P} (X=k)} _{=\,\lambda ^{k}e^{-\lambda }/k!}=e^{-\lambda }\sum _{k=0}^{\infty }{\frac {(t\lambda )^{k}}{k!}}=e^{\lambda (t-1)},\qquad t\in \mathbb {C} ,}
(use the definition of the exponential function) and thus we have
{\displaystyle \operatorname {E} [(X)_{n}]=\lambda ^{n}.}
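This derivative identity can be verified symbolically, e.g. with sympy:

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)

# Factorial moment generating function of a Poisson(lambda) variable.
M = sp.exp(lam * (t - 1))

# The nth derivative at t = 1 yields the nth factorial moment lambda**n.
for n in range(1, 5):
    moment = sp.diff(M, t, n).subs(t, 1)
    assert sp.simplify(moment - lam**n) == 0
```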
== See also ==
Moment (mathematics)
Moment-generating function
Cumulant-generating function
== References ==
A hidden Markov model (HMM) is a Markov model in which the observations are dependent on a latent (or hidden) Markov process (referred to as {\displaystyle X}). An HMM requires that there be an observable process {\displaystyle Y} whose outcomes depend on the outcomes of {\displaystyle X} in a known way. Since {\displaystyle X} cannot be observed directly, the goal is to learn about the state of {\displaystyle X} by observing {\displaystyle Y}. By definition of being a Markov model, an HMM has an additional requirement that the outcome of {\displaystyle Y} at time {\displaystyle t=t_{0}} must be "influenced" exclusively by the outcome of {\displaystyle X} at {\displaystyle t=t_{0}} and that the outcomes of {\displaystyle X} and {\displaystyle Y} at {\displaystyle t<t_{0}} must be conditionally independent of {\displaystyle Y} at {\displaystyle t=t_{0}} given {\displaystyle X} at time {\displaystyle t=t_{0}}. Estimation of the parameters in an HMM can be performed using maximum likelihood estimation. For linear chain HMMs, the Baum–Welch algorithm can be used to estimate parameters.
Hidden Markov models are known for their applications to thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory, pattern recognition—such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics.
== Definition ==
Let {\displaystyle X_{n}} and {\displaystyle Y_{n}} be discrete-time stochastic processes and {\displaystyle n\geq 1}. The pair {\displaystyle (X_{n},Y_{n})} is a hidden Markov model if
{\displaystyle X_{n}} is a Markov process whose behavior is not directly observable ("hidden");
{\displaystyle \operatorname {\mathbf {P} } {\bigl (}Y_{n}\in A\ {\bigl |}\ X_{1}=x_{1},\ldots ,X_{n}=x_{n}{\bigr )}=\operatorname {\mathbf {P} } {\bigl (}Y_{n}\in A\ {\bigl |}\ X_{n}=x_{n}{\bigr )}},
for every {\displaystyle n\geq 1}, {\displaystyle x_{1},\ldots ,x_{n}}, and every Borel set {\displaystyle A}.
Let {\displaystyle X_{t}} and {\displaystyle Y_{t}} be continuous-time stochastic processes. The pair {\displaystyle (X_{t},Y_{t})} is a hidden Markov model if
{\displaystyle X_{t}} is a Markov process whose behavior is not directly observable ("hidden");
{\displaystyle \operatorname {\mathbf {P} } (Y_{t_{0}}\in A\mid \{X_{t}\in B_{t}\}_{t\leq t_{0}})=\operatorname {\mathbf {P} } (Y_{t_{0}}\in A\mid X_{t_{0}}\in B_{t_{0}})},
for every {\displaystyle t_{0}}, every Borel set {\displaystyle A}, and every family of Borel sets {\displaystyle \{B_{t}\}_{t\leq t_{0}}}.
=== Terminology ===
The states of the process {\displaystyle X_{n}} (resp. {\displaystyle X_{t}}) are called hidden states, and {\displaystyle \operatorname {\mathbf {P} } {\bigl (}Y_{n}\in A\mid X_{n}=x_{n}{\bigr )}} (resp. {\displaystyle \operatorname {\mathbf {P} } {\bigl (}Y_{t}\in A\mid X_{t}\in B_{t}{\bigr )}}) is called the emission probability or output probability.
== Examples ==
=== Drawing balls from hidden urns ===
In its discrete form, a hidden Markov process can be visualized as a generalization of the urn problem with replacement (where each item from the urn is returned to the original urn before the next step). Consider this example: in a room that is not visible to an observer there is a genie. The room contains urns X1, X2, X3, ... each of which contains a known mix of balls, with each ball having a unique label y1, y2, y3, ... . The genie chooses an urn in that room and randomly draws a ball from that urn. It then puts the ball onto a conveyor belt, where the observer can observe the sequence of the balls but not the sequence of urns from which they were drawn. The genie has some procedure to choose urns; the choice of the urn for the n-th ball depends only upon a random number and the choice of the urn for the (n − 1)-th ball. The choice of urn does not directly depend on the urns chosen before this single previous urn; therefore, this is called a Markov process. It can be described by the upper part of Figure 1.
The Markov process cannot be observed, only the sequence of labeled balls, thus this arrangement is called a hidden Markov process. This is illustrated by the lower part of the diagram shown in Figure 1, where one can see that balls y1, y2, y3, y4 can be drawn at each state. Even if the observer knows the composition of the urns and has just observed a sequence of three balls, e.g. y1, y2 and y3 on the conveyor belt, the observer still cannot be sure which urn (i.e., at which state) the genie has drawn the third ball from. However, the observer can work out other information, such as the likelihood that the third ball came from each of the urns.
=== Weather guessing game ===
Consider two friends, Alice and Bob, who live far apart from each other and who talk together daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather, but she knows general trends. Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like.
Alice believes that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but she cannot observe them directly, that is, they are hidden from her. On each day, there is a certain chance that Bob will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since Bob tells Alice about his activities, those are the observations. The entire system is that of a hidden Markov model (HMM).
Alice knows the general weather trends in the area, and what Bob likes to do on average. In other words, the parameters of the HMM are known. They can be represented as follows in Python:
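The parameters described in the surrounding text can be written out as plain Python dictionaries. Only the non-equilibrium start belief, the 0.3 rainy-to-sunny transition, the 0.5 rainy-clean emission, and the 0.6 sunny-walk emission are fixed by the text; the remaining values below are taken from the standard version of this example and should be treated as assumptions:

```python
states = ('Rainy', 'Sunny')
observations = ('walk', 'shop', 'clean')

start_probability = {'Rainy': 0.6, 'Sunny': 0.4}

transition_probability = {
    'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
    'Sunny': {'Rainy': 0.4, 'Sunny': 0.6},
}

emission_probability = {
    'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
    'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
}

# Sanity check: the equilibrium of this two-state chain is
# Rainy 4/7 ~ 0.57, Sunny 3/7 ~ 0.43, matching the figures quoted below.
p_rainy = transition_probability['Sunny']['Rainy'] / (
    transition_probability['Sunny']['Rainy']
    + transition_probability['Rainy']['Sunny'])
assert round(p_rainy, 2) == 0.57
```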
In this piece of code, start_probability represents Alice's belief about which state the HMM is in when Bob first calls her (all she knows is that it tends to be rainy on average). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) approximately {'Rainy': 0.57, 'Sunny': 0.43}. The transition_probability represents the change of the weather in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. The emission_probability represents how likely Bob is to perform a certain activity on each day. If it is rainy, there is a 50% chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he is outside for a walk.
A similar example is further elaborated in the Viterbi algorithm page.
== Structural architecture ==
The diagram below shows the general architecture of an instantiated HMM. Each oval shape represents a random variable that can adopt any of a number of values. The random variable x(t) is the hidden state at time t (with the model from the above diagram, x(t) ∈ { x1, x2, x3 }). The random variable y(t) is the observation at time t (with y(t) ∈ { y1, y2, y3, y4 }). The arrows in the diagram (often called a trellis diagram) denote conditional dependencies.
From the diagram, it is clear that the conditional probability distribution of the hidden variable x(t) at time t, given the values of the hidden variable x at all times, depends only on the value of the hidden variable x(t − 1); the values at time t − 2 and before have no influence. This is called the Markov property. Similarly, the value of the observed variable y(t) depends on only the value of the hidden variable x(t) (both at time t).
In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). The parameters of a hidden Markov model are of two types, transition probabilities and emission probabilities (also known as output probabilities). The transition probabilities control the way the hidden state at time t is chosen given the hidden state at time
{\displaystyle t-1}.
The hidden state space is assumed to consist of one of N possible values, modelled as a categorical distribution. (See the section below on extensions for other possibilities.) This means that for each of the N possible states that a hidden variable at time t can be in, there is a transition probability from this state to each of the N possible states of the hidden variable at time
{\displaystyle t+1}, for a total of {\displaystyle N^{2}} transition probabilities. The set of transition probabilities for transitions from any given state must sum to 1. Thus, the {\displaystyle N\times N} matrix of transition probabilities is a Markov matrix. Because any transition probability can be determined once the others are known, there are a total of {\displaystyle N(N-1)} transition parameters.
In addition, for each of the N possible states, there is a set of emission probabilities governing the distribution of the observed variable at a particular time given the state of the hidden variable at that time. The size of this set depends on the nature of the observed variable. For example, if the observed variable is discrete with M possible values, governed by a categorical distribution, there will be
{\displaystyle M-1} separate parameters, for a total of {\displaystyle N(M-1)} emission parameters over all hidden states. On the other hand, if the observed variable is an M-dimensional vector distributed according to an arbitrary multivariate Gaussian distribution, there will be M parameters controlling the means and {\displaystyle {\frac {M(M+1)}{2}}} parameters controlling the covariance matrix, for a total of {\displaystyle N\left(M+{\frac {M(M+1)}{2}}\right)={\frac {NM(M+3)}{2}}=O(NM^{2})} emission parameters. (In such a case, unless the value of M is small, it may be more practical to restrict the nature of the covariances between individual elements of the observation vector, e.g. by assuming that the elements are independent of each other, or less restrictively, are independent of all but a fixed number of adjacent elements.)
== Inference ==
Several inference problems are associated with hidden Markov models, as outlined below.
=== Probability of an observed sequence ===
The task is to compute, given the parameters of the model, the probability of a particular output sequence. This requires summation over all possible state sequences:
The probability of observing a sequence
{\displaystyle Y=y(0),y(1),\dots ,y(L-1),}
of length L is given by
{\displaystyle P(Y)=\sum _{X}P(Y\mid X)P(X),}
where the sum runs over all possible hidden-node sequences
{\displaystyle X=x(0),x(1),\dots ,x(L-1).}
Applying the principle of dynamic programming, this problem, too, can be handled efficiently using the forward algorithm.
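A minimal sketch of this computation on a toy two-state model (illustrative numbers, not from the article): the forward algorithm evaluates P(Y) in O(L·N²) time and agrees with the brute-force sum over all hidden-state sequences.

```python
import itertools

# Toy HMM (illustrative numbers).
states = ('Rainy', 'Sunny')
start = {'Rainy': 0.6, 'Sunny': 0.4}
trans = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
         'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
emit = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
        'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}

def forward_probability(obs):
    """P(Y) via the forward algorithm (dynamic programming)."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[r] * trans[r][s] for r in states) * emit[s][o]
                 for s in states}
    return sum(alpha.values())

def brute_force_probability(obs):
    """P(Y) by explicit summation over all hidden sequences."""
    total = 0.0
    for seq in itertools.product(states, repeat=len(obs)):
        p = start[seq[0]] * emit[seq[0]][obs[0]]
        for i in range(1, len(obs)):
            p *= trans[seq[i - 1]][seq[i]] * emit[seq[i]][obs[i]]
        total += p
    return total

obs = ('walk', 'shop', 'clean')
assert abs(forward_probability(obs) - brute_force_probability(obs)) < 1e-12
```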
=== Probability of the latent variables ===
A number of related tasks ask about the probability of one or more of the latent variables, given the model's parameters and a sequence of observations
{\displaystyle y(1),\dots ,y(t)}.
==== Filtering ====
The task is to compute, given the model's parameters and a sequence of observations, the distribution over hidden states of the last latent variable at the end of the sequence, i.e. to compute
{\displaystyle P(x(t)\mid y(1),\dots ,y(t))}. This task is used when the sequence of latent variables is thought of as the underlying states that a process moves through at a sequence of points in time, with corresponding observations at each point. Then, it is natural to ask about the state of the process at the end.
This problem can be handled efficiently using the forward algorithm. An example is when the algorithm is applied to a Hidden Markov Network to determine
{\displaystyle \mathrm {P} {\big (}h_{t}\mid v_{1:t}{\big )}}.
==== Smoothing ====
This is similar to filtering but asks about the distribution of a latent variable somewhere in the middle of a sequence, i.e. to compute
{\displaystyle P(x(k)\mid y(1),\dots ,y(t))} for some {\displaystyle k<t}. From the perspective described above, this can be thought of as the probability distribution over hidden states for a point in time k in the past, relative to time t.
The forward-backward algorithm is a good method for computing the smoothed values for all hidden state variables.
==== Most likely explanation ====
The task, unlike the previous two, asks about the joint probability of the entire sequence of hidden states that generated a particular sequence of observations (see illustration on the right). This task is generally applicable when HMM's are applied to different sorts of problems from those for which the tasks of filtering and smoothing are applicable. An example is part-of-speech tagging, where the hidden states represent the underlying parts of speech corresponding to an observed sequence of words. In this case, what is of interest is the entire sequence of parts of speech, rather than simply the part of speech for a single word, as filtering or smoothing would compute.
This task requires finding a maximum over all possible state sequences, and can be solved efficiently by the Viterbi algorithm.
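A minimal Viterbi sketch on a toy two-state model (illustrative numbers, not from the article), checked against brute-force maximization over all state sequences:

```python
import itertools

# Toy HMM (illustrative numbers).
states = ('Rainy', 'Sunny')
start = {'Rainy': 0.6, 'Sunny': 0.4}
trans = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
         'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
emit = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
        'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}

def viterbi(obs):
    """Most likely hidden-state sequence and its joint probability."""
    # delta[s]: probability of the best path ending in state s;
    # path[s]: that path itself.
    delta = {s: start[s] * emit[s][obs[0]] for s in states}
    path = {s: [s] for s in states}
    for o in obs[1:]:
        new_delta, new_path = {}, {}
        for s in states:
            prev = max(states, key=lambda r: delta[r] * trans[r][s])
            new_delta[s] = delta[prev] * trans[prev][s] * emit[s][o]
            new_path[s] = path[prev] + [s]
        delta, path = new_delta, new_path
    best = max(states, key=lambda s: delta[s])
    return path[best], delta[best]

def brute_force(obs):
    """Reference answer: maximize the joint over all state sequences."""
    def joint(seq):
        p = start[seq[0]] * emit[seq[0]][obs[0]]
        for i in range(1, len(obs)):
            p *= trans[seq[i - 1]][seq[i]] * emit[seq[i]][obs[i]]
        return p
    best = max(itertools.product(states, repeat=len(obs)), key=joint)
    return list(best), joint(best)

obs = ('walk', 'shop', 'clean')
path_v, p_v = viterbi(obs)
path_b, p_b = brute_force(obs)
assert path_v == path_b and abs(p_v - p_b) < 1e-15
```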
=== Statistical significance ===
For some of the above problems, it may also be interesting to ask about statistical significance. What is the probability that a sequence drawn from some null distribution will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence? When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates the false positive rate associated with failing to reject the hypothesis for the output sequence.
== Learning ==
The parameter learning task in HMMs is to find, given an output sequence or a set of such sequences, the best set of state transition and emission probabilities. The task is usually to derive the maximum likelihood estimate of the parameters of the HMM given the set of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using the Baum–Welch algorithm or the Baldi–Chauvin algorithm. The Baum–Welch algorithm is a special case of the expectation-maximization algorithm.
If HMMs are used for time series prediction, more sophisticated Bayesian inference methods, like Markov chain Monte Carlo (MCMC) sampling, are proven to be favorable over finding a single maximum likelihood model both in terms of accuracy and stability. Since MCMC imposes significant computational burden, in cases where computational scalability is also of interest, one may alternatively resort to variational approximations to Bayesian inference. Indeed, approximate variational inference offers computational efficiency comparable to expectation-maximization, while yielding an accuracy profile only slightly inferior to exact MCMC-type Bayesian inference.
== Applications ==
HMMs can be applied in many fields where the goal is to recover a data sequence that is not immediately observable (but other data that depend on the sequence are). Applications include:
Computational finance
Single-molecule kinetic analysis
Neuroscience
Cryptanalysis
Speech recognition, including Siri
Speech synthesis
Part-of-speech tagging
Document separation in scanning solutions
Machine translation
Partial discharge
Gene prediction
Handwriting recognition
Alignment of bio-sequences
Time series analysis
Activity recognition
Protein folding
Sequence classification
Metamorphic virus detection
Sequence motif discovery (DNA and proteins)
DNA hybridization kinetics
Chromatin state discovery
Transportation forecasting
Solar irradiance variability
== History ==
Hidden Markov models were described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s. From the linguistics point of view, hidden Markov models are equivalent to stochastic regular grammar.
In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.
== Extensions ==
=== General state spaces ===
In the hidden Markov models considered above, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). Hidden Markov models can also be generalized to allow continuous state spaces. Examples of such models are those where the Markov process over hidden variables is a linear dynamical system, with a linear relationship among related variables and where all hidden and observed variables follow a Gaussian distribution. In simple cases, such as the linear dynamical system just mentioned, exact inference is tractable (in this case, using the Kalman filter); however, in general, exact inference in HMMs with continuous latent variables is infeasible, and approximate methods must be used, such as the extended Kalman filter or the particle filter.
Nowadays, inference in hidden Markov models is also performed in nonparametric settings, where the dependency structure enables identifiability of the model, and where the learnability limits are still under exploration.
=== Bayesian modeling of the transitions probabilities ===
Hidden Markov models are generative models, in which the joint distribution of observations and hidden states, or equivalently both the prior distribution of hidden states (the transition probabilities) and conditional distribution of observations given states (the emission probabilities), is modeled. The above algorithms implicitly assume a uniform prior distribution over the transition probabilities. However, it is also possible to create hidden Markov models with other types of prior distributions. An obvious candidate, given the categorical distribution of the transition probabilities, is the Dirichlet distribution, which is the conjugate prior distribution of the categorical distribution. Typically, a symmetric Dirichlet distribution is chosen, reflecting ignorance about which states are inherently more likely than others. The single parameter of this distribution (termed the concentration parameter) controls the relative density or sparseness of the resulting transition matrix. A choice of 1 yields a uniform distribution. Values greater than 1 produce a dense matrix, in which the transition probabilities between pairs of states are likely to be nearly equal. Values less than 1 result in a sparse matrix in which, for each given source state, only a small number of destination states have non-negligible transition probabilities. It is also possible to use a two-level prior Dirichlet distribution, in which one Dirichlet distribution (the upper distribution) governs the parameters of another Dirichlet distribution (the lower distribution), which in turn governs the transition probabilities. The upper distribution governs the overall distribution of states, determining how likely each state is to occur; its concentration parameter determines the density or sparseness of states. 
Such a two-level prior distribution, where both concentration parameters are set to produce sparse distributions, might be useful for example in unsupervised part-of-speech tagging, where some parts of speech occur much more commonly than others; learning algorithms that assume a uniform prior distribution generally perform poorly on this task. The parameters of models of this sort, with non-uniform prior distributions, can be learned using Gibbs sampling or extended versions of the expectation-maximization algorithm.
An extension of the previously described hidden Markov models with Dirichlet priors uses a Dirichlet process in place of a Dirichlet distribution. This type of model allows for an unknown and potentially infinite number of states. It is common to use a two-level Dirichlet process, similar to the previously described model with two levels of Dirichlet distributions. Such a model is called a hierarchical Dirichlet process hidden Markov model, or HDP-HMM for short. It was originally described under the name "Infinite Hidden Markov Model" and was further formalized in "Hierarchical Dirichlet Processes".
=== Discriminative approach ===
A different type of extension uses a discriminative model in place of the generative model of standard HMMs. This type of model directly models the conditional distribution of the hidden states given the observations, rather than modeling the joint distribution. An example of this model is the so-called maximum entropy Markov model (MEMM), which models the conditional distribution of the states using logistic regression (also known as a "maximum entropy model"). The advantage of this type of model is that arbitrary features (i.e. functions) of the observations can be modeled, allowing domain-specific knowledge of the problem at hand to be injected into the model. Models of this sort are not limited to modeling direct dependencies between a hidden state and its associated observation; rather, features of nearby observations, of combinations of the associated observation and nearby observations, or in fact of arbitrary observations at any distance from a given hidden state can be included in the process used to determine the value of a hidden state. Furthermore, there is no need for these features to be statistically independent of each other, as would be the case if such features were used in a generative model. Finally, arbitrary features over pairs of adjacent hidden states can be used rather than simple transition probabilities. The disadvantages of such models are: (1) The types of prior distributions that can be placed on hidden states are severely limited; (2) It is not possible to predict the probability of seeing an arbitrary observation. This second limitation is often not an issue in practice, since many common usages of HMM's do not require such predictive probabilities.
A variant of the previously described discriminative model is the linear-chain conditional random field. This uses an undirected graphical model (aka Markov random field) rather than the directed graphical models of MEMM's and similar models. The advantage of this type of model is that it does not suffer from the so-called label bias problem of MEMM's, and thus may make more accurate predictions. The disadvantage is that training can be slower than for MEMM's.
=== Other extensions ===
Yet another variant is the factorial hidden Markov model, which allows for a single observation to be conditioned on the corresponding hidden variables of a set of {\displaystyle K} independent Markov chains, rather than a single Markov chain. It is equivalent to a single HMM with {\displaystyle N^{K}} states (assuming there are {\displaystyle N} states for each chain), and therefore, learning in such a model is difficult: for a sequence of length {\displaystyle T}, a straightforward Viterbi algorithm has complexity {\displaystyle O(N^{2K}\,T)}. To find an exact solution, a junction tree algorithm could be used, but it results in an {\displaystyle O(N^{K+1}\,K\,T)} complexity. In practice, approximate techniques, such as variational approaches, could be used.
All of the above models can be extended to allow for more distant dependencies among hidden states, e.g. allowing for a given state to be dependent on the previous two or three states rather than a single previous state; i.e. the transition probabilities are extended to encompass sets of three or four adjacent states (or in general {\displaystyle K} adjacent states). The disadvantage of such models is that dynamic-programming algorithms for training them have an {\displaystyle O(N^{K}\,T)} running time, for {\displaystyle K} adjacent states and {\displaystyle T} total observations (i.e. a length-{\displaystyle T} Markov chain). This extension has been widely used in bioinformatics, in the modeling of DNA sequences.
Another recent extension is the triplet Markov model, in which an auxiliary underlying process is added to model some data specificities. Many variants of this model have been proposed. One should also mention the interesting link that has been established between the theory of evidence and the triplet Markov models, which allows data to be fused in a Markovian context and nonstationary data to be modeled. Alternative multi-stream data fusion strategies have also been proposed in the recent literature.
Finally, a different rationale towards addressing the problem of modeling nonstationary data by means of hidden Markov models was suggested in 2012. It consists in employing a small recurrent neural network (RNN), specifically a reservoir network, to capture the evolution of the temporal dynamics in the observed data. This information, encoded in the form of a high-dimensional vector, is used as a conditioning variable of the HMM state transition probabilities. Under such a setup, one eventually obtains a nonstationary HMM whose transition probabilities evolve over time in a manner inferred from the data, in contrast to some unrealistic ad-hoc model of temporal evolution.
In 2023, two algorithms were introduced for the hidden Markov model. These algorithms enable the computation of the posterior distribution of the HMM without explicitly modeling the joint distribution, utilizing only the conditional distributions. Unlike traditional methods such as the Forward-Backward and Viterbi algorithms, which require knowledge of the joint law of the HMM and can be computationally intensive to learn, the Discriminative Forward-Backward and Discriminative Viterbi algorithms circumvent the need for the law of the observations. This allows the HMM to be applied as a discriminative model, offering a more efficient and versatile approach to leveraging hidden Markov models in various applications.
The model suitable in the context of longitudinal data is named the latent Markov model. The basic version of this model has been extended to include individual covariates and random effects, and to model more complex data structures such as multilevel data. A complete overview of latent Markov models, with special attention to the model assumptions and their practical use, is provided in the literature.
== Measure theory ==
Given a Markov transition matrix and an invariant distribution on the states, a probability measure can be imposed on the set of subshifts. For example, consider the Markov chain given on the left on the states {\displaystyle A,B_{1},B_{2}}, with invariant distribution {\displaystyle \pi =(2/7,4/7,1/7)}. By ignoring the distinction between {\displaystyle B_{1},B_{2}}, this space of subshifts on {\displaystyle A,B_{1},B_{2}} is projected into another space of subshifts on {\displaystyle A,B}, and this projection also projects the probability measure down to a probability measure on the subshifts on {\displaystyle A,B}.
The curious thing is that the probability measure on the subshifts on {\displaystyle A,B} is not generated by a Markov chain on {\displaystyle A,B}, not even one of higher order. Intuitively, this is because if one observes a long sequence of {\displaystyle B^{n}}, then one becomes increasingly sure that {\displaystyle \Pr(A\mid B^{n})\to {\frac {2}{3}}}, meaning that the observable part of the system can be affected by something infinitely far in the past.
Conversely, there exists a space of subshifts on 6 symbols, projected to subshifts on 2 symbols, such that any Markov measure on the smaller subshift has a preimage measure that is not Markov of any order (example 2.6).
== See also ==
== References ==
== External links ==
=== Concepts ===
Teif, V. B.; Rippe, K. (2010). "Statistical–mechanical lattice models for protein–DNA binding in chromatin". J. Phys.: Condens. Matter. 22 (41): 414105. arXiv:1004.5514. Bibcode:2010JPCM...22O4105T. doi:10.1088/0953-8984/22/41/414105. PMID 21386588. S2CID 103345.
A Revealing Introduction to Hidden Markov Models by Mark Stamp, San Jose State University.
Fitting HMM's with expectation-maximization – complete derivation
A step-by-step tutorial on HMMs Archived 2017-08-13 at the Wayback Machine (University of Leeds)
Hidden Markov Models (an exposition using basic mathematics)
Hidden Markov Models (by Narada Warakagoda)
Hidden Markov Models: Fundamentals and Applications Part 1, Part 2 (by V. Petrushin)
Lecture on a Spreadsheet by Jason Eisner, Video and interactive spreadsheet
The Bayes factor is a ratio of two competing statistical models represented by their evidence, and is used to quantify the support for one model over the other. The models in question can have a common set of parameters, such as a null hypothesis and an alternative, but this is not necessary; for instance, it could also be a non-linear model compared to its linear approximation. The Bayes factor can be thought of as a Bayesian analog to the likelihood-ratio test, although it uses the integrated (i.e., marginal) likelihood rather than the maximized likelihood. As such, both quantities only coincide under simple hypotheses (e.g., two specific parameter values). Also, in contrast with null hypothesis significance testing, Bayes factors support evaluation of evidence in favor of a null hypothesis, rather than only allowing the null to be rejected or not rejected.
Although conceptually simple, the computation of the Bayes factor can be challenging depending on the complexity of the model and the hypotheses. Since closed-form expressions of the marginal likelihood are generally not available, numerical approximations based on MCMC samples have been suggested. A widely used approach is the method proposed by Chib (1995). Chib and Jeliazkov (2001) later extended this method to handle cases where Metropolis-Hastings samplers are used. For certain special cases, simplified algebraic expressions can be derived; for instance, the Savage–Dickey density ratio in the case of a precise (equality constrained) hypothesis against an unrestricted alternative. Another approximation, derived by applying Laplace's approximation to the integrated likelihoods, is known as the Bayesian information criterion (BIC); in large data sets the Bayes factor will approach the BIC as the influence of the priors wanes. In small data sets, priors generally matter and must not be improper since the Bayes factor will be undefined if either of the two integrals in its ratio is not finite.
== Definition ==
The Bayes factor is the ratio of two marginal likelihoods; that is, the likelihoods of two statistical models integrated over the prior probabilities of their parameters.
The posterior probability {\displaystyle \Pr(M|D)} of a model M given data D is given by Bayes' theorem:
{\displaystyle \Pr(M|D)={\frac {\Pr(D|M)\Pr(M)}{\Pr(D)}}.}
The key data-dependent term {\displaystyle \Pr(D|M)} represents the probability that some data are produced under the assumption of the model M; evaluating it correctly is the key to Bayesian model comparison.
Given a model selection problem in which one wishes to choose between two models on the basis of observed data D, the plausibility of the two different models M1 and M2, parametrised by model parameter vectors {\displaystyle \theta _{1}} and {\displaystyle \theta _{2}}, is assessed by the Bayes factor K given by
{\displaystyle K={\frac {\Pr(D|M_{1})}{\Pr(D|M_{2})}}={\frac {\int \Pr(\theta _{1}|M_{1})\Pr(D|\theta _{1},M_{1})\,d\theta _{1}}{\int \Pr(\theta _{2}|M_{2})\Pr(D|\theta _{2},M_{2})\,d\theta _{2}}}={\frac {\frac {\Pr(M_{1}|D)\Pr(D)}{\Pr(M_{1})}}{\frac {\Pr(M_{2}|D)\Pr(D)}{\Pr(M_{2})}}}={\frac {\Pr(M_{1}|D)}{\Pr(M_{2}|D)}}{\frac {\Pr(M_{2})}{\Pr(M_{1})}}.}
When the two models have equal prior probability, so that {\displaystyle \Pr(M_{1})=\Pr(M_{2})}
, the Bayes factor is equal to the ratio of the posterior probabilities of M1 and M2. If instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each statistical model is used, then the test becomes a classical likelihood-ratio test. Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors). An advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure. It thus guards against overfitting. For models where an explicit version of the likelihood is not available or too costly to evaluate numerically, approximate Bayesian computation can be used for model selection in a Bayesian framework,
with the caveat that approximate-Bayesian estimates of Bayes factors are often biased.
Other approaches are:
to treat model comparison as a decision problem, computing the expected value or cost of each model choice;
to use minimum message length (MML);
to use minimum description length (MDL).
== Interpretation ==
A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. The fact that a Bayes factor can produce evidence for and not just against a null hypothesis is one of the key advantages of this analysis method.
Harold Jeffreys gave a scale (Jeffreys' scale) for interpretation of {\displaystyle K}:
The second column gives the corresponding weights of evidence in decihartleys (also known as decibans); bits are added in the third column for clarity. The table continues in the other direction, so that, for example,
{\displaystyle K\leq 10^{-2}} is decisive evidence for {\displaystyle M_{2}}.
An alternative table, widely cited, is provided by Kass and Raftery (1995):
According to I. J. Good, the just-noticeable difference of humans in their everyday life, when it comes to a change in degree of belief in a hypothesis, is about a factor of 1.3, or 1 deciban, or 1/3 of a bit, or a change from 1:1 to 5:4 in odds ratio.
== Example ==
Suppose we have a random variable that produces either a success or a failure. We want to compare a model M1 where the probability of success is q = 1⁄2, and another model M2 where q is unknown and we take a prior distribution for q that is uniform on [0,1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution:
{\displaystyle {{200 \choose 115}q^{115}(1-q)^{85}}.}
Thus we have for M1
{\displaystyle P(X=115\mid M_{1})={200 \choose 115}\left({1 \over 2}\right)^{200}\approx 0.006}
whereas for M2 we have
{\displaystyle P(X=115\mid M_{2})=\int _{0}^{1}{200 \choose 115}q^{115}(1-q)^{85}dq={1 \over 201}\approx 0.005}
The ratio is then 1.2, which is "barely worth mentioning" even if it points very slightly towards M1.
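This calculation is easy to reproduce in Python; the only closed form used beyond the binomial coefficient is the fact that, under a uniform prior on q, the binomial likelihood integrates to exactly 1/(n+1).

```python
from math import comb

n, k = 200, 115

# Marginal likelihood under M1: q is fixed at 1/2.
p_m1 = comb(n, k) * 0.5 ** n

# Marginal likelihood under M2: the uniform prior on q integrates the
# binomial likelihood to a Beta function, giving exactly 1/(n+1).
p_m2 = 1 / (n + 1)                 # = 1/201

K = p_m1 / p_m2                    # the Bayes factor
print(round(K, 3))                 # → 1.197, "barely worth mentioning"
```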
A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a very different result. Such a test says that M1 should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = 1⁄2 is 0.02, and as a two-tailed test of getting a figure as extreme as or more extreme than 115 is 0.04. Note that 115 is more than two standard deviations away from 100. Thus, whereas a frequentist hypothesis test would yield significant results at the 5% significance level, the Bayes factor hardly considers this to be an extreme result. Note, however, that a non-uniform prior (for example one that reflects the fact that you expect the number of success and failures to be of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test.
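The frequentist tail probabilities quoted above can be checked directly from the binomial distribution, with no statistics library needed; by the symmetry of the null distribution, the two-tailed p-value is twice the upper tail.

```python
from math import comb

n, q = 200, 0.5

# One-tailed probability of 115 or more successes under the null q = 1/2.
p_upper = sum(comb(n, k) for k in range(115, n + 1)) * q ** n

# Two-tailed p-value: double the upper tail (the null is symmetric).
p_two = 2 * p_upper

print(round(p_upper, 3), round(p_two, 3))   # close to the quoted 0.02 and 0.04
```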
A classical likelihood-ratio test would have found the maximum likelihood estimate for q, namely
{\displaystyle {\hat {q}}={\frac {115}{200}}=0.575}
, whence
{\displaystyle \textstyle P(X=115\mid M_{2})={{200 \choose 115}{\hat {q}}^{115}(1-{\hat {q}})^{85}}\approx 0.06}
(rather than averaging over all possible q). That gives a likelihood ratio of 0.1 and points towards M2.
M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.
On the other hand, the modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model M1 has 0 parameters, and so its Akaike information criterion (AIC) value is
{\displaystyle 2\cdot 0-2\cdot \ln(0.005956)\approx 10.2467}
. Model M2 has 1 parameter, and so its AIC value is
{\displaystyle 2\cdot 1-2\cdot \ln(0.056991)\approx 7.7297}
. Hence M1 is about
{\displaystyle \exp \left({\frac {7.7297-10.2467}{2}}\right)\approx 0.284}
times as probable as M2 to minimize the information loss. Thus M2 is slightly preferred, but M1 cannot be excluded.
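The AIC arithmetic above is easy to verify numerically; the two maximized-likelihood values are the ones given in the text.

```python
from math import log, exp

lik_m1 = 0.005956   # maximized likelihood under M1 (no free parameters)
lik_m2 = 0.056991   # maximized likelihood under M2 (one free parameter, q-hat)

def aic(n_params, lik):
    """Akaike information criterion: 2k - 2 ln(L)."""
    return 2 * n_params - 2 * log(lik)

aic1, aic2 = aic(0, lik_m1), aic(1, lik_m2)

# Relative likelihood of M1 with respect to M2 (the AIC-minimizing model).
rel = exp((aic2 - aic1) / 2)
print(round(aic1, 4), round(aic2, 4), round(rel, 3))  # → 10.2467 7.7297 0.284
```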
== See also ==
Akaike information criterion
Approximate Bayesian computation
Bayesian information criterion
Deviance information criterion
Lindley's paradox
Minimum message length
Model selection
E-Value
Statistical ratios
Odds ratio
Relative risk
== References ==
== Further reading ==
Bernardo, J.; Smith, A. F. M. (1994). Bayesian Theory. John Wiley. ISBN 0-471-92416-4.
Denison, D. G. T.; Holmes, C. C.; Mallick, B. K.; Smith, A. F. M. (2002). Bayesian Methods for Nonlinear Classification and Regression. John Wiley. ISBN 0-471-49036-9.
Dienes, Z. (2019). How do I know what my theory predicts? Advances in Methods and Practices in Psychological Science doi:10.1177/2515245919876960
Duda, Richard O.; Hart, Peter E.; Stork, David G. (2000). "Section 9.6.5". Pattern classification (2nd ed.). Wiley. pp. 487–489. ISBN 0-471-05669-3.
Gelman, A.; Carlin, J.; Stern, H.; Rubin, D. (1995). Bayesian Data Analysis. London: Chapman & Hall. ISBN 0-412-03991-5.
Jaynes, E. T. (1994), Probability Theory: the logic of science, chapter 24.
Kadane, Joseph B.; Dickey, James M. (1980). "Bayesian Decision Theory and the Simplification of Models". In Kmenta, Jan; Ramsey, James B. (eds.). Evaluation of Econometric Models. New York: Academic Press. pp. 245–268. ISBN 0-12-416550-8.
Lee, P. M. (2012). Bayesian Statistics: an introduction. Wiley. ISBN 9781118332573.
Richard, Mark; Vecer, Jan (2021). "Efficiency Testing of Prediction Markets: Martingale Approach, Likelihood Ratio and Bayes Factor Analysis". Risks. 9 (2): 31. doi:10.3390/risks9020031. hdl:10419/258120.
Winkler, Robert (2003). Introduction to Bayesian Inference and Decision (2nd ed.). Probabilistic. ISBN 0-9647938-4-9.
== External links ==
BayesFactor —an R package for computing Bayes factors in common research designs
Bayes factor calculator — Online calculator for informed Bayes factors
Bayes Factor Calculators Archived 2015-05-07 at the Wayback Machine —web-based version of much of the BayesFactor package
In probability theory, a degenerate distribution on a measure space
{\displaystyle (E,{\mathcal {A}},\mu )} is a probability distribution whose support is a null set with respect to {\displaystyle \mu }. For instance, in the n-dimensional space ℝn endowed with the Lebesgue measure, any distribution concentrated on a d-dimensional subspace with d < n is a degenerate distribution on ℝn. This is essentially the same notion as a singular probability measure, but the term degenerate is typically used when the distribution arises as a limit of (non-degenerate) distributions.
When the support of a degenerate distribution consists of a single point a, this distribution is a Dirac measure in a: it is the distribution of a deterministic random variable equal to a with probability 1. This is a special case of a discrete distribution; its probability mass function equals 1 in a and 0 everywhere else.
In the case of a real-valued random variable, the cumulative distribution function of the degenerate distribution localized in a is
{\displaystyle F_{a}(x)=\left\{{\begin{matrix}1,&{\mbox{if }}x\geq a\\0,&{\mbox{if }}x<a\end{matrix}}\right.}
Such degenerate distributions often arise as limits of continuous distributions whose variance goes to 0.
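Both properties — the unit-step CDF and the limit of continuous distributions with vanishing variance — can be illustrated with a short, self-contained sketch (the function names are hypothetical):

```python
from math import erf, sqrt

def degenerate_cdf(x, a):
    """CDF of the degenerate distribution localized at a: a unit step at a."""
    return 1.0 if x >= a else 0.0

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2), via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

a = 2.0
print(degenerate_cdf(1.9, a), degenerate_cdf(2.0, a), degenerate_cdf(2.1, a))
# → 0.0 1.0 1.0

# As sigma -> 0, the normal CDF at a point below the mean approaches 0,
# i.e. the normal distribution degenerates to the step at mu.
for sigma in (1.0, 0.1, 0.001):
    print(round(normal_cdf(1.9, a, sigma), 4))
```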
== Constant random variable ==
A constant random variable is a discrete random variable that takes a constant value, regardless of any event that occurs. This is technically different from an almost surely constant random variable, which may take other values, but only on events with probability zero:
Let X: Ω → ℝ be a real-valued random variable defined on a probability space (Ω, ℙ). Then X is an almost surely constant random variable if there exists
{\displaystyle a\in \mathbb {R} }
such that
{\displaystyle \mathbb {P} (X=a)=1,}
and is furthermore a constant random variable if
{\displaystyle X(\omega )=a,\quad \forall \omega \in \Omega .}
A constant random variable is almost surely constant, but the converse is not true, since if X is almost surely constant then there may still exist γ ∈ Ω such that X(γ) ≠ a.
For practical purposes, the distinction between X being constant or almost surely constant is unimportant, since these two situations correspond to the same degenerate distribution: the Dirac measure.
== Higher dimensions ==
Degeneracy of a multivariate distribution in n random variables arises when the support lies in a space of dimension less than n. This occurs when at least one of the variables is a deterministic function of the others. For example, in the 2-variable case suppose that Y = aX + b for scalar random variables X and Y and scalar constants a ≠ 0 and b; here knowing the value of one of X or Y gives exact knowledge of the value of the other. All the possible points (x, y) fall on the one-dimensional line y = ax + b.
In general when one or more of n random variables are exactly linearly determined by the others, if the covariance matrix exists its rank is less than n and its determinant is 0, so it is positive semi-definite but not positive definite, and the joint probability distribution is degenerate.
Degeneracy can also occur even with non-zero covariance. For example, when scalar X is symmetrically distributed about 0 and Y is exactly given by Y = X2, all possible points (x, y) fall on the parabola y = x2, which is a one-dimensional subset of the two-dimensional space.
== References ==
In information theory, the entropy power inequality (EPI) is a result that relates to so-called "entropy power" of random variables. It shows that the entropy power of suitably well-behaved random variables is a superadditive function. The entropy power inequality was proved in 1948 by Claude Shannon in his seminal paper "A Mathematical Theory of Communication". Shannon also provided a sufficient condition for equality to hold; Stam (1959) showed that the condition is in fact necessary.
== Statement of the inequality ==
For a random vector
{\displaystyle X:\Omega \to \mathbb {R} ^{n}}
with probability density function
{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }
, the differential entropy of
X
{\displaystyle X}
, denoted
{\displaystyle h(X)}
, is defined to be
{\displaystyle h(X)=-\int _{\mathbb {R} ^{n}}f(x)\log f(x)\,dx}
and the entropy power of
X
{\displaystyle X}
, denoted
{\displaystyle N(X)}
, is defined to be
{\displaystyle N(X)={\frac {1}{2\pi e}}e^{{\frac {2}{n}}h(X)}.}
In particular,
{\displaystyle N(X)=|K|^{1/n}}
when
X
{\displaystyle X}
is normally distributed with covariance matrix
K
{\displaystyle K}
.
Let
X
{\displaystyle X}
and
Y
{\displaystyle Y}
be independent random variables with probability density functions in the
{\displaystyle L^{p}}
space
{\displaystyle L^{p}(\mathbb {R} ^{n})}
for some
{\displaystyle p>1}
. Then
{\displaystyle N(X+Y)\geq N(X)+N(Y).}
Moreover, equality holds if and only if
X
{\displaystyle X}
and
Y
{\displaystyle Y}
are multivariate normal random variables with proportional covariance matrices.
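The equality case is easy to verify numerically in one dimension, where the entropy power of a Gaussian reduces to its variance (a small sketch, not part of the original statement; function names are hypothetical):

```python
from math import pi, e, exp, log

def gaussian_entropy(sigma2):
    """Differential entropy of a 1-D Gaussian with variance sigma2."""
    return 0.5 * log(2 * pi * e * sigma2)

def entropy_power(h, n=1):
    """Entropy power N(X) = exp(2h/n) / (2*pi*e)."""
    return exp(2 * h / n) / (2 * pi * e)

# For a 1-D Gaussian the entropy power equals the variance, and for
# independent Gaussians Var(X+Y) = Var(X) + Var(Y), so the EPI holds
# with equality.
s2x, s2y = 2.0, 3.0
Nx = entropy_power(gaussian_entropy(s2x))
Ny = entropy_power(gaussian_entropy(s2y))
Nxy = entropy_power(gaussian_entropy(s2x + s2y))
print(round(Nx, 6), round(Ny, 6), round(Nxy, 6))   # → 2.0 3.0 5.0
```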
== Alternative form of the inequality ==
The entropy power inequality can be rewritten in an equivalent form that does not explicitly depend on the definition of entropy power (see Costa and Cover reference below).
Let
X
{\displaystyle X}
and
Y
{\displaystyle Y}
be independent random variables, as above. Then, let
X
′
{\displaystyle X'}
and
Y
′
{\displaystyle Y'}
be independent random variables with Gaussian distributions such that
{\displaystyle h(X')=h(X)}
and
{\displaystyle h(Y')=h(Y)}
Then,
{\displaystyle h(X+Y)\geq h(X'+Y')}
== See also ==
Information entropy
Information theory
Limiting density of discrete points
Self-information
Kullback–Leibler divergence
Entropy estimation
== References ==
Dembo, Amir; Cover, Thomas M.; Thomas, Joy A. (1991). "Information-theoretic inequalities". IEEE Trans. Inf. Theory. 37 (6): 1501–1518. doi:10.1109/18.104312. MR 1134291. S2CID 845669.
Costa, Max H. M.; Cover, Thomas M. (1984). "On the similarity of the entropy-power inequality and the Brunn-Minkowski inequality". IEEE Trans. Inf. Theory. 30 (6): 837–839. doi:10.1109/TIT.1984.1056983.
Gardner, Richard J. (2002). "The Brunn–Minkowski inequality". Bull. Amer. Math. Soc. (N.S.). 39 (3): 355–405 (electronic). doi:10.1090/S0273-0979-02-00941-2.
Shannon, Claude E. (1948). "A mathematical theory of communication". Bell System Tech. J. 27 (3): 379–423, 623–656. doi:10.1002/j.1538-7305.1948.tb01338.x. hdl:10338.dmlcz/101429.
Stam, A. J. (1959). "Some inequalities satisfied by the quantities of information of Fisher and Shannon". Information and Control. 2 (2): 101–112. doi:10.1016/S0019-9958(59)90348-1.
Jump diffusion is a stochastic process that involves jumps and diffusion. It has important applications in magnetic reconnection, coronal mass ejections, condensed matter physics, and pattern theory and computational vision.
== In physics ==
In crystals, atomic diffusion typically consists of jumps between vacant lattice sites. On time and length scales that average over many single jumps, the net motion of the jumping atoms can be described as regular diffusion.
Jump diffusion can be studied on a microscopic scale by inelastic neutron scattering and by Mößbauer spectroscopy. Closed expressions for the autocorrelation function have been derived for several jump(-diffusion) models:
Singwi, Sjölander 1960: alternation between oscillatory motion and directed motion
Chudley, Elliott 1961: jumps on a lattice
Sears 1966, 1967: jump diffusion of rotational degrees of freedom
Hall, Ross 1981: jump diffusion within a restricted volume
== In economics and finance ==
A jump-diffusion model is a form of mixture model, mixing a jump process and a diffusion process. In finance, jump-diffusion models were first introduced by Robert C. Merton. Such models have a range of financial applications from option pricing, to credit risk, to time series forecasting.
== In pattern theory, computer vision, and medical imaging ==
In pattern theory and computational vision in medical imaging, jump-diffusion processes were first introduced by Grenander and Miller
as a form of random sampling algorithm that mixes "focus"-like motions, the diffusion processes, with saccade-like motions, via jump processes.
The approach modelled sciences of electron-micrographs as containing multiple shapes, each having some fixed dimensional representation, with the collection of micrographs filling out the sample space corresponding to the unions of multiple finite-dimensional spaces.
Using techniques from pattern theory, a posterior probability model was constructed over the countable union of sample space; this is therefore a hybrid system model, containing the discrete notions of object number along with the continuum notions of shape.
The jump-diffusion process was constructed to have ergodic properties so that after initially flowing away from its initial condition it would generate samples from the posterior probability model.
== See also ==
Jump process, an example of jump diffusion
Piecewise-deterministic Markov process (PDMP), an example of jump diffusion and a generalization of the jump process
Hybrid system (in the context of dynamical systems), a generalization of jump diffusion
== References ==
The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby.
Given a secret key X that has n uniform random bits, of which an adversary was able to learn the values of some t < n bits of that key, the leftover hash lemma states that it is possible to produce a key of about n − t bits, over which the adversary has almost no knowledge, without knowing which t are known to the adversary. Since the adversary knows all but n − t bits, this is almost optimal.
More precisely, the leftover hash lemma states that it is possible to extract a length asymptotic to {\displaystyle H_{\infty }(X)} (the min-entropy of X) bits from a random variable X that are almost uniformly distributed. In other words, an adversary who has some partial knowledge about X will have almost no knowledge about the extracted value. This is also known as privacy amplification (see privacy amplification section in the article Quantum key distribution).
Randomness extractors achieve the same result, but use (normally) less randomness.
Let X be a random variable over
{\displaystyle {\mathcal {X}}}
and let
{\displaystyle m>0}
. Let
{\textstyle h\colon {\mathcal {S}}\times {\mathcal {X}}\rightarrow \{0,\,1\}^{m}}
be a 2-universal hash function. If
{\textstyle m\leq H_{\infty }(X)-2\log \left({\frac {1}{\varepsilon }}\right)}
then for S uniform over
{\displaystyle {\mathcal {S}}}
and independent of X, we have:
{\textstyle \delta \left[(h(S,X),S),(U,S)\right]\leq \varepsilon .}
where U is uniform over {\displaystyle \{0,1\}^{m}} and independent of S.
{\textstyle H_{\infty }(X)=-\log \max _{x}\Pr[X=x]}
is the min-entropy of X, which measures the amount of randomness X has. The min-entropy is always less than or equal to the Shannon entropy. Note that
{\textstyle \max _{x}\Pr[X=x]}
is the probability of correctly guessing X. (The best guess is to guess the most probable value.) Therefore, the min-entropy measures how difficult it is to guess X.
{\textstyle 0\leq \delta (X,Y)={\frac {1}{2}}\sum _{v}\left|\Pr[X=v]-\Pr[Y=v]\right|\leq 1}
is a statistical distance between X and Y.
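The statistical (total variation) distance used in the lemma is straightforward to compute for finite distributions; a minimal sketch, representing each distribution as a dict from values to probabilities:

```python
def statistical_distance(p, q):
    """Total variation distance: half the L1 distance between two
    probability mass functions given as dicts."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(v, 0.0) - q.get(v, 0.0)) for v in support)

uniform_bit = {0: 0.5, 1: 0.5}
biased_bit = {0: 0.8, 1: 0.2}

print(statistical_distance(uniform_bit, biased_bit))   # → 0.3
print(statistical_distance(uniform_bit, uniform_bit))  # → 0.0
```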
== See also ==
Universal hashing
Min-entropy
Rényi entropy
Information-theoretic security
== References ==
C. H. Bennett, G. Brassard, and J. M. Robert. Privacy amplification by public discussion. SIAM Journal on Computing, 17(2):210-229, 1988.
C. Bennett, G. Brassard, C. Crepeau, and U. Maurer. Generalized privacy amplification. IEEE Transactions on Information Theory, 41, 1995.
J. Håstad, R. Impagliazzo, L. A. Levin and M. Luby. A Pseudorandom Generator from any One-way Function. SIAM Journal on Computing, v28 n4, pp. 1364-1396, 1999.
The Box–Muller transform, by George Edward Pelham Box and Mervin Edgar Muller, is a random number sampling method for generating pairs of independent, standard, normally distributed (zero expectation, unit variance) random numbers, given a source of uniformly distributed random numbers. The method was first mentioned explicitly by Raymond E. A. C. Paley and Norbert Wiener in their 1934 treatise on Fourier transforms in the complex domain. Given the status of these latter authors and the widespread availability and use of their treatise, it is almost certain that Box and Muller were well aware of its contents.
The Box–Muller transform is commonly expressed in two forms. The basic form as given by Box and Muller takes two samples from the uniform distribution on the interval (0,1) and maps them to two standard, normally distributed samples. The polar form takes two samples from a different interval, [−1,+1], and maps them to two normally distributed samples without the use of sine or cosine functions.
The Box–Muller transform was developed as a more computationally efficient alternative to the inverse transform sampling method. The ziggurat algorithm gives a more efficient method for scalar processors (e.g. old CPUs), while the Box–Muller transform is superior for processors with vector units (e.g. GPUs or modern CPUs).
== Basic form ==
Suppose U1 and U2 are independent samples chosen from the uniform distribution on the unit interval (0, 1). Let
{\displaystyle Z_{0}=R\cos(\Theta )={\sqrt {-2\ln U_{1}}}\cos(2\pi U_{2})\,}
and
{\displaystyle Z_{1}=R\sin(\Theta )={\sqrt {-2\ln U_{1}}}\sin(2\pi U_{2}).\,}
Then Z0 and Z1 are independent random variables with a standard normal distribution.
The derivation is based on a property of a two-dimensional Cartesian system: when the X and Y coordinates are described by two independent and normally distributed random variables, the random variables for R2 and Θ (shown above) in the corresponding polar coordinates are also independent and can be expressed as
{\displaystyle R^{2}=-2\cdot \ln U_{1}\,}
and
{\displaystyle \Theta =2\pi U_{2}.\,}
Because R2 is the square of the norm of the standard bivariate normal variable (X, Y), it has the chi-squared distribution with two degrees of freedom. In the special case of two degrees of freedom, the chi-squared distribution coincides with the exponential distribution, and the equation for R2 above is a simple way of generating the required exponential variate.
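A direct implementation of the basic form takes only a few lines; the sketch below uses `1 - random.random()` so that the first uniform sample lies in (0, 1] and the logarithm is always defined.

```python
import random
from math import cos, sin, sqrt, log, pi

def box_muller(u1, u2):
    """Map two independent uniform samples to two independent N(0,1) samples."""
    r = sqrt(-2.0 * log(u1))      # R = sqrt(-2 ln U1)
    theta = 2.0 * pi * u2         # Theta = 2*pi*U2
    return r * cos(theta), r * sin(theta)

random.seed(0)
samples = []
for _ in range(50_000):
    z0, z1 = box_muller(1.0 - random.random(), random.random())
    samples.extend((z0, z1))

mean = sum(samples) / len(samples)
var = sum((z - mean) ** 2 for z in samples) / len(samples)
# For this sample size, mean and variance should be close to 0 and 1.
print(round(mean, 2), round(var, 2))
```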
== Polar form ==
The polar form was first proposed by J. Bell and then modified by R. Knop. While several different versions of the polar method have been described, the version of R. Knop will be described here because it is the most widely used, in part due to its inclusion in Numerical Recipes. A slightly different form is described as "Algorithm P" by D. Knuth in The Art of Computer Programming.
Given u and v, independent and uniformly distributed in the closed interval [−1, +1], set s = R2 = u2 + v2. If s = 0 or s ≥ 1, discard u and v, and try another pair (u, v). Because u and v are uniformly distributed and because only points within the unit circle have been admitted, the values of s will be uniformly distributed in the open interval (0, 1), too. The latter can be seen by calculating the cumulative distribution function for s in the interval (0, 1). This is the area of a circle with radius
{\textstyle {\sqrt {s}}}, divided by {\displaystyle \pi }. From this we find the probability density function to have the constant value 1 on the interval (0, 1). Equally so, the angle θ divided by {\displaystyle 2\pi } is uniformly distributed in the interval [0, 1) and independent of s.
We now identify the value of s with that of U1 and
{\displaystyle \theta /(2\pi )} with that of U2 in the basic form. As shown in the figure, the values of {\displaystyle \cos \theta =\cos 2\pi U_{2}} and {\displaystyle \sin \theta =\sin 2\pi U_{2}} in the basic form can be replaced with the ratios {\displaystyle \cos \theta =u/R=u/{\sqrt {s}}} and {\textstyle \sin \theta =v/R=v/{\sqrt {s}}}, respectively. The advantage is that calculating the trigonometric functions directly can be avoided. This is helpful when trigonometric functions are more expensive to compute than the single division that replaces each one.
Just as the basic form produces two standard normal deviates, so does this alternate calculation.
{\displaystyle z_{0}={\sqrt {-2\ln U_{1}}}\cos(2\pi U_{2})={\sqrt {-2\ln s}}\left({\frac {u}{\sqrt {s}}}\right)=u\cdot {\sqrt {\frac {-2\ln s}{s}}}}
and
{\displaystyle z_{1}={\sqrt {-2\ln U_{1}}}\sin(2\pi U_{2})={\sqrt {-2\ln s}}\left({\frac {v}{\sqrt {s}}}\right)=v\cdot {\sqrt {\frac {-2\ln s}{s}}}.}
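As a concrete sketch (not part of the article; the function name polar_gaussian and the sample counts are my own), the rejection loop and the two output deviates described above can be written as:

```python
import math
import random

def polar_gaussian(rng=random):
    # One draw of the polar method: reject (u, v) outside the open unit
    # disc (and the origin), then scale by sqrt(-2 ln s / s) to obtain
    # two independent standard normal deviates.
    while True:
        u = rng.uniform(-1.0, 1.0)
        v = rng.uniform(-1.0, 1.0)
        s = u * u + v * v
        if 0.0 < s < 1.0:
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor

random.seed(42)
samples = [x for _ in range(20000) for x in polar_gaussian()]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(round(mean, 1), round(var, 1))  # close to 0.0 and 1.0
```

The sample mean and variance approach 0 and 1, as expected for standard normal deviates.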
== Contrasting the two forms ==
The polar method differs from the basic method in that it is a type of rejection sampling. It discards some generated random numbers, but can be faster than the basic method because it is simpler to compute (provided that the random number generator is relatively fast) and is more numerically robust. Avoiding the use of expensive trigonometric functions improves speed over the basic form. It discards 1 − π/4 ≈ 21.46% of the total input uniformly distributed random number pairs generated, i.e. discards 4/π − 1 ≈ 27.32% uniformly distributed random number pairs per Gaussian random number pair generated, requiring 4/π ≈ 1.2732 input random numbers per output random number.
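The quoted rejection rate is easy to check empirically; a minimal sketch (the trial count is my own choice):

```python
import math
import random

# Count how many candidate (u, v) pairs the polar method discards;
# the expected fraction is 1 - pi/4 ≈ 0.2146.
random.seed(0)
trials = 200_000
rejected = 0
for _ in range(trials):
    u = random.uniform(-1.0, 1.0)
    v = random.uniform(-1.0, 1.0)
    s = u * u + v * v
    if s == 0.0 or s >= 1.0:
        rejected += 1
print(abs(rejected / trials - (1 - math.pi / 4)) < 0.01)  # True
```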
The basic form requires two multiplications, 1/2 logarithm, 1/2 square root, and one trigonometric function for each normal variate. On some processors, the cosine and sine of the same argument can be calculated in parallel using a single instruction. Notably for Intel-based machines, one can use the fsincos assembler instruction or the expi instruction (usually available from C as an intrinsic function), to calculate complex
{\displaystyle \exp(iz)=e^{iz}=\cos z+i\sin z,}
and just separate the real and imaginary parts.
Note:
To explicitly calculate the complex-polar form use the following substitutions in the general form,
Let {\textstyle r={\sqrt {-2\ln(u_{1})}}} and {\textstyle z=2\pi u_{2}.} Then
{\displaystyle re^{iz}={\sqrt {-2\ln(u_{1})}}e^{i2\pi u_{2}}={\sqrt {-2\ln(u_{1})}}\left[\cos(2\pi u_{2})+i\sin(2\pi u_{2})\right].}
The polar form requires 3/2 multiplications, 1/2 logarithm, 1/2 square root, and 1/2 division for each normal variate. The effect is to replace one multiplication and one trigonometric function with a single division and a conditional loop.
== Tails truncation ==
When a computer is used to produce a uniform random variable it will inevitably have some inaccuracies because there is a lower bound on how close numbers can be to 0. If the generator uses 32 bits per output value, the smallest non-zero number that can be generated is
{\displaystyle 2^{-32}}. When {\displaystyle U_{1}} and {\displaystyle U_{2}} are equal to this the Box–Muller transform produces a normal random deviate equal to {\textstyle \delta ={\sqrt {-2\ln(2^{-32})}}\cos(2\pi 2^{-32})\approx 6.660}. This means that the algorithm will not produce random variables more than 6.660 standard deviations from the mean. This corresponds to a proportion of {\displaystyle 2(1-\Phi (\delta ))\simeq 2.738\times 10^{-11}} lost due to the truncation, where {\displaystyle \Phi (\delta )} is the standard cumulative normal distribution. With 64 bits the limit is pushed to {\displaystyle \delta =9.419} standard deviations, for which {\displaystyle 2(1-\Phi (\delta ))<5\times 10^{-21}}.
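The truncation bound for a 32-bit generator can be reproduced directly (a small sketch; the constant matches the δ above):

```python
import math

# Largest deviate the transform can produce when U1 = U2 = 2**-32.
u_min = 2.0 ** -32
delta = math.sqrt(-2.0 * math.log(u_min)) * math.cos(2.0 * math.pi * u_min)
print(round(delta, 3))  # 6.66
```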
== Implementation ==
=== C++ ===
The standard Box–Muller transform generates values from the standard normal distribution (i.e. standard normal deviates) with mean 0 and standard deviation 1. The implementation below in standard C++ generates values from any normal distribution with mean
{\displaystyle \mu } and variance {\displaystyle \sigma ^{2}}. If {\displaystyle Z} is a standard normal deviate, then {\displaystyle X=Z\sigma +\mu } will have a normal distribution with mean {\displaystyle \mu } and standard deviation {\displaystyle \sigma }. The random number generator has been seeded to ensure that new, pseudo-random values will be returned from sequential calls to the generateGaussianNoise function.
=== JavaScript ===
=== Julia ===
== See also ==
Inverse transform sampling
Marsaglia polar method, similar transform to Box–Muller, which uses Cartesian coordinates, instead of polar coordinates
== References ==
== External links ==
Weisstein, Eric W. "Box-Muller Transformation". MathWorld.
How to Convert a Uniform Distribution to a Gaussian Distribution (C Code)
In queueing theory, a discipline within the mathematical theory of probability, an M/M/1 queue represents the queue length in a system having a single server, where arrivals are determined by a Poisson process and job service times have an exponential distribution. The model name is written in Kendall's notation. The model is the most elementary of queueing models and an attractive object of study as closed-form expressions can be obtained for many metrics of interest in this model. An extension of this model with more than one server is the M/M/c queue.
== Model definition ==
An M/M/1 queue is a stochastic process whose state space is the set {0,1,2,3,...} where the value corresponds to the number of customers in the system, including any currently in service.
Arrivals occur at rate λ according to a Poisson process and move the process from state i to i + 1.
Service times have an exponential distribution with rate parameter μ in the M/M/1 queue, where 1/μ is the mean service time.
All arrival times and service times are (usually) assumed to be independent of one another.
A single server serves customers one at a time from the front of the queue, according to a first-come, first-served discipline. When the service is complete the customer leaves the queue and the number of customers in the system reduces by one.
The buffer is of infinite size, so there is no limit on the number of customers it can contain.
The model can be described as a continuous time Markov chain with transition rate matrix
{\displaystyle Q={\begin{pmatrix}-\lambda &\lambda \\\mu &-(\mu +\lambda )&\lambda \\&\mu &-(\mu +\lambda )&\lambda \\&&\mu &-(\mu +\lambda )&\lambda &\\&&&&\ddots \end{pmatrix}}}
on the state space {0,1,2,3,...}. This is the same continuous time Markov chain as in a birth–death process. The state space diagram for this chain is as below.
== Stationary analysis ==
The model is considered stable only if λ < μ. If, on average, arrivals happen faster than service completions the queue will grow indefinitely long and the system will not have a stationary distribution. The stationary distribution is the limiting distribution for large values of t.
Various performance measures can be computed explicitly for the M/M/1 queue. We write ρ = λ/μ for the utilization of the buffer and require ρ < 1 for the queue to be stable. ρ represents the average proportion of time which the server is occupied.
The probability that the stationary process is in state i (contains i customers, including those in service) is
{\displaystyle \pi _{i}=(1-\rho )\rho ^{i}.}
=== Average number of customers in the system ===
We see that the number of customers in the system is geometrically distributed with parameter 1 − ρ. Thus the average number of customers in the system is ρ/(1 − ρ) and the variance of number of customers in the system is ρ/(1 − ρ)2. This result holds for any work conserving service regime, such as processor sharing.
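A quick numerical check of the geometric form and its mean (the rates here are illustrative, not from the article):

```python
# Stationary distribution pi_i = (1 - rho) * rho**i of the M/M/1 queue,
# truncated far enough out that the remaining tail mass is negligible.
lam, mu = 2.0, 5.0              # arrival and service rates, lam < mu
rho = lam / mu                  # utilization, here 0.4
pi = [(1 - rho) * rho ** i for i in range(200)]
mean_n = sum(i * p for i, p in enumerate(pi))
print(round(sum(pi), 6), round(mean_n, 6))  # mass ≈ 1, mean ≈ rho/(1-rho)
```

For ρ = 0.4 the mean queue length is ρ/(1 − ρ) = 2/3, matching the formula above.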
=== Busy period of server ===
The busy period is the time from the instant a customer arrives to an empty system until the instant a customer departs leaving behind an empty system. The busy period has probability density function
{\displaystyle f(t)={\begin{cases}{\frac {1}{t{\sqrt {\rho }}}}e^{-(\lambda +\mu )t}I_{1}(2t{\sqrt {\lambda \mu }})&t>0\\0&{\text{otherwise}}\end{cases}}}
where I1 is a modified Bessel function of the first kind, obtained by using Laplace transforms and inverting the solution.
The Laplace transform of the M/M/1 busy period is given by
{\displaystyle \mathbb {E} (e^{-sF})={\frac {1}{2\lambda }}(\lambda +\mu +s-{\sqrt {(\lambda +\mu +s)^{2}-4\lambda \mu }})}
which gives the moments of the busy period, in particular the mean is 1/(μ − λ) and variance is given by
{\displaystyle {\frac {1}{\mu ^{2}(1-\rho )^{2}}}.}
=== Response time ===
The average response time or sojourn time (total time a customer spends in the system) does not depend on scheduling discipline and can be computed using Little's law as 1/(μ − λ). The average time spent waiting is 1/(μ − λ) − 1/μ = ρ/(μ − λ). The distribution of response times experienced does depend on scheduling discipline.
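The 1/(μ − λ) mean sojourn time can be checked by simulating successive FCFS waiting times with the Lindley recursion (a sketch; the rates and sample size are my own choices):

```python
import random

# W_{n+1} = max(0, W_n + S_n - A_{n+1}): waiting time of successive
# customers in a FCFS M/M/1 queue; sojourn time = waiting + service.
random.seed(1)
lam, mu = 3.0, 4.0              # expected mean sojourn: 1/(mu - lam) = 1.0
n = 200_000
w = 0.0
total = 0.0
for _ in range(n):
    s = random.expovariate(mu)          # service time of this customer
    total += w + s                      # accumulate sojourn times
    a = random.expovariate(lam)         # interarrival to the next customer
    w = max(0.0, w + s - a)
print(abs(total / n - 1.0) < 0.1)  # True
```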
==== First-come, first-served discipline ====
For customers who arrive and find the queue as a stationary process, the response time they experience (the sum of both waiting time and service time) has Laplace transform
(μ − λ)/(s + μ − λ) and therefore probability density function
{\displaystyle f(t)={\begin{cases}(\mu -\lambda )e^{-(\mu -\lambda )t}&t>0\\0&{\text{otherwise.}}\end{cases}}}
==== Processor sharing discipline ====
In an M/M/1-PS queue there is no waiting line and all jobs receive an equal proportion of the service capacity. If, for example, the single server serves at rate 16 and there are 4 jobs in the system, each job experiences service at rate 4. The rate at which jobs receive service changes each time a job arrives at or departs from the system.
For customers who arrive to find the queue as a stationary process, the Laplace transform of the distribution of response times experienced by customers was published in 1970, for which an integral representation is known. The waiting time distribution (response time less service time) for a customer requiring x amount of service has transform
{\displaystyle W^{\ast }(s|x)={\frac {(1-\rho )(1-\rho r^{2})e^{-[\lambda (1-r)+s]x}}{(1-\rho r^{2})-\rho (1-r)^{2}e^{-(\mu /r-\lambda r)x}}}}
where r is the smaller root of the equation
{\displaystyle \lambda r^{2}-(\lambda +\mu +s)r+\mu =0.}
The mean response time for a job arriving and requiring amount x of service can therefore be computed as x μ/(μ − λ). An alternative approach computes the same results using a spectral expansion method.
== Transient solution ==
We can write a probability mass function dependent on t to describe the probability that the M/M/1 queue is in a particular state at a given time. We assume that the queue is initially in state i and write pk(t) for the probability of being in state k at time t. Then
{\displaystyle p_{k}(t)=e^{-(\lambda +\mu )t}\left[\rho ^{\frac {k-i}{2}}I_{k-i}(at)+\rho ^{\frac {k-i-1}{2}}I_{k+i+1}(at)+(1-\rho )\rho ^{k}\sum _{j=k+i+2}^{\infty }\rho ^{-j/2}I_{j}(at)\right]}
where {\displaystyle i} is the initial number of customers in the station at time {\displaystyle t=0}, {\displaystyle \rho =\lambda /\mu }, {\displaystyle a=2{\sqrt {\lambda \mu }}} and {\displaystyle I_{k}} is the modified Bessel function of the first kind. Moments for the transient solution can be expressed as the sum of two monotone functions.
== Diffusion approximation ==
When the utilization ρ is close to 1 the process can be approximated by a reflected Brownian motion with drift parameter λ – μ and variance parameter λ + μ. This heavy traffic limit was first introduced by John Kingman.
== References ==
In probability theory, two sequences of probability measures are said to be contiguous if asymptotically they share the same support. Thus the notion of contiguity extends the concept of absolute continuity to the sequences of measures.
The concept was originally introduced by Le Cam (1960) as part of his foundational contribution to the development of asymptotic theory in mathematical statistics. He is best known for the general concepts of local asymptotic normality and contiguity.
== Definition ==
Let {\displaystyle (\Omega _{n},{\mathcal {F}}_{n})} be a sequence of measurable spaces, each equipped with two measures Pn and Qn.
We say that Qn is contiguous with respect to Pn (denoted Qn ◁ Pn) if for every sequence An of measurable sets, Pn(An) → 0 implies Qn(An) → 0.
The sequences Pn and Qn are said to be mutually contiguous or bi-contiguous (denoted Qn ◁▷ Pn) if both Qn is contiguous with respect to Pn and Pn is contiguous with respect to Qn.
The notion of contiguity is closely related to that of absolute continuity. We say that a measure Q is absolutely continuous with respect to P (denoted Q ≪ P) if for any measurable set A, P(A) = 0 implies Q(A) = 0. Heuristically, Q is absolutely continuous with respect to P if the support of Q is contained in the support of P; this picture is only a heuristic, since two measures can be mutually singular even though the support of one is contained in the support of the other (for example, when one measure concentrates on an open set and the other on its boundary). The contiguity property replaces this requirement with an asymptotic one, again stated heuristically: Qn is contiguous with respect to Pn if the "limiting support" of Qn is a subset of the limiting support of Pn.
It is possible however that each of the measures Qn be absolutely continuous with respect to Pn, while the sequence Qn not being contiguous with respect to Pn.
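A standard example of this (my own illustration, not from the article): take Pn = N(0, 1) and Qn = N(n, 1). Each pair is mutually absolutely continuous, yet for An = [n/2, ∞) we have Pn(An) → 0 while Qn(An) → 1, so contiguity fails. Numerically, using only the normal CDF:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# P_n = N(0,1), Q_n = N(n,1), witness sets A_n = [n/2, infinity).
for n in (2, 6, 10):
    p = 1.0 - norm_cdf(n / 2.0)       # P_n(A_n), tends to 0
    q = 1.0 - norm_cdf(n / 2.0 - n)   # Q_n(A_n), tends to 1
    print(n, round(p, 4), round(q, 4))
```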
The fundamental Radon–Nikodym theorem for absolutely continuous measures states that if Q is absolutely continuous with respect to P, then Q has density with respect to P, denoted as ƒ = dQ⁄dP, such that for any measurable set A
{\displaystyle Q(A)=\int _{A}f\,\mathrm {d} P,}
which is interpreted as being able to "reconstruct" the measure Q from knowing the measure P and the derivative ƒ. A similar result exists for contiguous sequences of measures, and is given by the Le Cam's third lemma.
== Properties ==
For the case {\displaystyle (P_{n},Q_{n})=(P,Q)} for all n, we have {\displaystyle Q_{n}\triangleleft P_{n}\Leftrightarrow Q\ll P}.
It is possible that {\displaystyle P_{n}\ll Q_{n}} holds for all n without {\displaystyle P_{n}\triangleleft Q_{n}}.
== Le Cam's first lemma ==
For two sequences of measures {\displaystyle (P_{n}){\text{ and }}(Q_{n})} on measurable spaces {\displaystyle (\Omega _{n},{\mathcal {F}}_{n})} the following statements are equivalent:
{\displaystyle P_{n}\triangleleft Q_{n}}
{\displaystyle {\frac {\mathrm {d} Q_{n}}{\mathrm {d} P_{n}}}{\overset {P_{n}}{\,\longrightarrow \,}}U{\text{ along a subsequence }}\Rightarrow P(U>0)=1}
{\displaystyle {\frac {\mathrm {d} P_{n}}{\mathrm {d} Q_{n}}}{\overset {Q_{n}}{\,\longrightarrow \,}}V{\text{ along a subsequence }}\Rightarrow E(V)=1}
{\displaystyle T_{n}{\overset {P_{n}}{\,\longrightarrow \,}}0\,\Rightarrow \,T_{n}{\overset {Q_{n}}{\,\longrightarrow \,}}0} for any statistics {\displaystyle T_{n}:\Omega _{n}\rightarrow \mathbb {R} }.
where {\displaystyle U} and {\displaystyle V} are random variables on {\displaystyle (\Omega ,{\mathcal {F}},P)} and {\displaystyle (\Omega ',{\mathcal {F}}',Q)}.
=== Interpretation ===
Prohorov's theorem tells us that given a sequence of probability measures, every subsequence has a further subsequence which converges weakly. Le Cam's first lemma shows that the properties of the associated limit points determine whether contiguity applies or not. This can be understood in analogy with the non-asymptotic notion of absolute continuity of measures.
== Applications ==
Econometrics
== See also ==
Asymptotic theory (statistics)
Contiguity (disambiguation)
Probability space
== Notes ==
== References ==
== Additional literature ==
Roussas, George G. (1972), Contiguity of Probability Measures: Some Applications in Statistics, CUP, ISBN 978-0-521-09095-7.
Scott, D.J. (1982) Contiguity of Probability Measures, Australian & New Zealand Journal of Statistics, 24 (1), 80–88.
== External links ==
Contiguity Asymptopia: 17 October 2000, David Pollard
Asymptotic normality under contiguity in a dependence case
A Central Limit Theorem under Contiguous Alternatives
Superefficiency, Contiguity, LAN, Regularity, Convolution Theorems
Testing statistical hypotheses
Necessary and sufficient conditions for contiguity and entire asymptotic separation of probability measures R Sh Liptser et al 1982 Russ. Math. Surv. 37 107–136
"Contiguity of Probability Measures", David J. Scott, La Trobe University
"On the Concept of Contiguity", Hall, Loynes
The Barabási–Albert (BA) model is an algorithm for generating random scale-free networks using a preferential attachment mechanism. Several natural and human-made systems, including the Internet, the World Wide Web, citation networks, and some social networks are thought to be approximately scale-free and certainly contain few nodes (called hubs) with unusually high degree as compared to the other nodes of the network. The BA model tries to explain the existence of such nodes in real networks. The algorithm is named for its inventors Albert-László Barabási and Réka Albert.
== Concepts ==
Many observed networks (at least approximately) fall into the class of scale-free networks, meaning that they have power-law (or scale-free) degree distributions, while random graph models such as the Erdős–Rényi (ER) model and the Watts–Strogatz (WS) model do not exhibit power laws. The Barabási–Albert model is one of several proposed models that generate scale-free networks. It incorporates two important general concepts: growth and preferential attachment. Both growth and preferential attachment exist widely in real networks.
Growth means that the number of nodes in the network increases over time.
Preferential attachment means that the more connected a node is, the more likely it is to receive new links. Nodes with a higher degree have a stronger ability to grab links added to the network. Intuitively, the preferential attachment can be understood if we think in terms of social networks connecting people. Here a link from A to B means that person A "knows" or "is acquainted with" person B. Heavily linked nodes represent well-known people with lots of relations. When a newcomer enters the community, they are more likely to become acquainted with one of those more visible people rather than with a relative unknown. The BA model was proposed by assuming that in the World Wide Web, new pages link preferentially to hubs, i.e. very well known sites such as Google, rather than to pages that hardly anyone knows. If someone selects a new page to link to by randomly choosing an existing link, the probability of selecting a particular page would be proportional to its degree. The BA model claims that this explains the preferential attachment probability rule.
One limitation of this mechanism is that a node's degree is largely determined by its age, so a latecomer cannot overtake older nodes; the later Bianconi–Barabási model addresses this issue by introducing a "fitness" parameter.
Preferential attachment is an example of a positive feedback cycle where initially random variations (one node initially having more links or having started accumulating links earlier than another) are automatically reinforced, thus greatly magnifying differences. This is also sometimes called the Matthew effect, "the rich get richer". See also autocatalysis.
== Algorithm ==
The only parameter in the BA model is {\displaystyle m}, a positive integer. The network initializes with a network of {\displaystyle m_{0}\geq m} nodes.
At each step, add one new node, then sample {\displaystyle m} neighbors among the existing vertices from the network, with a probability that is proportional to the number of links that the existing nodes already have (the original papers did not specify how to handle cases where the same existing node is chosen multiple times). Formally, the probability {\displaystyle p_{i}} that the new node is connected to node {\displaystyle i} is
{\displaystyle p_{i}={\frac {k_{i}}{\sum _{j}k_{j}}},}
where {\displaystyle k_{i}} is the degree of node {\displaystyle i} and the sum is made over all pre-existing nodes {\displaystyle j} (i.e. the denominator results in twice the current number of edges in the network). This step can be performed by first uniformly sampling one edge, then sampling one of the two vertices on the edge.
Heavily linked nodes ("hubs") tend to quickly accumulate even more links, while nodes with only a few links are unlikely to be chosen as the destination for a new link. The new nodes have a "preference" to attach themselves to the already heavily linked nodes.
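The growth step can be sketched as follows (an illustrative implementation, not the authors' code; duplicate targets are avoided by re-drawing, a choice the original papers leave unspecified):

```python
import random
from collections import Counter

def barabasi_albert(n, m, seed=None):
    # Grow a BA network with n nodes, attaching each new node to m existing
    # nodes.  Preferential attachment is implemented by sampling uniformly
    # from a list in which every node appears once per unit of degree.
    rng = random.Random(seed)
    targets = list(range(m))   # the first added node links to the m seed nodes
    repeated = []              # node i appears deg(i) times in this list
    edges = []
    for new_node in range(m, n):
        edges.extend((new_node, t) for t in targets)
        repeated.extend(targets)
        repeated.extend([new_node] * m)
        targets = []
        while len(targets) < m:            # m distinct degree-biased picks
            t = rng.choice(repeated)
            if t not in targets:
                targets.append(t)
    return edges

edges = barabasi_albert(1000, 2, seed=7)
deg = Counter(x for e in edges for x in e)
print(len(edges), max(deg.values()))  # 1996 edges; a few high-degree hubs
```

Each added node contributes exactly m edges, so a run with n = 1000 and m = 2 yields (1000 − 2) × 2 = 1996 edges, and the degree sequence develops the heavy tail described above.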
== Properties ==
=== Degree distribution ===
The degree distribution resulting from the BA model is scale free, in particular, it is a power law of the form
{\displaystyle P(k)\sim k^{-3}}
=== Hirsch index distribution ===
The h-index or Hirsch index distribution was shown to also be scale free and was proposed as the lobby index, to be used as a centrality measure
{\displaystyle H(k)\sim k^{-6}}
Furthermore, an analytic result for the density of nodes with h-index 1 can be obtained in the case where
{\displaystyle m_{0}=1}:
{\displaystyle H(1){\Big |}_{m_{0}=1}=4-\pi }
=== Node degree correlations ===
Correlations between the degrees of connected nodes develop spontaneously in the BA model because of the way the network evolves. The probability,
{\displaystyle n_{k\ell }}, of finding a link that connects a node of degree {\displaystyle k} to an ancestor node of degree {\displaystyle \ell } in the BA model for the special case of {\displaystyle m=1} (BA tree) is given by
{\displaystyle n_{k\ell }={\frac {4\left(\ell -1\right)}{k\left(k+1\right)\left(k+\ell \right)\left(k+\ell +1\right)\left(k+\ell +2\right)}}+{\frac {12\left(\ell -1\right)}{k\left(k+\ell -1\right)\left(k+\ell \right)\left(k+\ell +1\right)\left(k+\ell +2\right)}}.}
This confirms the existence of degree correlations, because if the distributions were uncorrelated, we would get {\displaystyle n_{k\ell }=k^{-3}\ell ^{-3}}.
For general {\displaystyle m}, the fraction of links that connect a node of degree {\displaystyle k} to a node of degree {\displaystyle \ell } is
{\displaystyle p(k,\ell )={\frac {2m(m+1)}{k(k+1)\ell (\ell +1)}}\left[1-{\frac {{\binom {2m+2}{m+1}}{\binom {k+\ell -2m}{\ell -m}}}{\binom {k+\ell +2}{\ell +1}}}\right].}
Also, the nearest-neighbor degree distribution {\displaystyle p(\ell \mid k)}, that is, the degree distribution of the neighbors of a node with degree {\displaystyle k}, is given by
{\displaystyle p(\ell \mid k)={\frac {m(k+2)}{k\ell (\ell +1)}}\left[1-{\frac {{\binom {2m+2}{m+1}}{\binom {k+\ell -2m}{\ell -m}}}{\binom {k+\ell +2}{\ell +1}}}\right].}
In other words, if we select a node with degree {\displaystyle k}, and then select one of its neighbors randomly, the probability that this randomly selected neighbor will have degree {\displaystyle \ell } is given by the expression {\displaystyle p(\ell \mid k)} above.
=== Clustering coefficient ===
The case for {\displaystyle m=1} is trivial: networks are trees and the clustering coefficient is equal to zero.
An analytical result for the clustering coefficient of the BA model was obtained by Klemm and Eguíluz and proven by Bollobás. A mean-field approach to study the clustering coefficient was applied by Fronczak, Fronczak and Holyst.
The average clustering coefficient of the Barabási–Albert model depends on the size of the network N:
{\displaystyle \langle C\rangle \sim {\frac {(\ln N)^{2}}{N}}.}
This behavior is distinct from the behavior of small-world networks where clustering is independent of system size.
The clustering as a function of node degree
{\displaystyle C(k)} is practically independent of {\displaystyle k}.
=== Spectral properties ===
The spectral density of the BA model has a different shape from the semicircular spectral density of the random graph. It has a triangle-like shape with the top lying well above the semicircle and edges decaying as a power law. It was also proved (in Section 5.1 of the cited work) that the shape of this spectral density is not an exact triangular function, by analyzing the moments of the spectral density as a function of the power-law exponent.
=== Dynamic scaling ===
By definition, the BA model describes a time developing phenomenon and hence, besides its scale-free property, one could also look for its dynamic scaling property.
In the BA network nodes can also be characterized by generalized degree
{\displaystyle q}, the product of the square root of the birth time of each node and their corresponding degree {\displaystyle k}, instead of the degree {\displaystyle k} alone, since the time of birth matters in the BA network. We find that the generalized degree distribution {\displaystyle F(q,t)} has some non-trivial features and exhibits dynamic scaling
{\displaystyle F(q,t)\sim t^{-1/2}\phi (q/t^{1/2}).}
It implies that the distinct plots of
{\displaystyle F(q,t)} vs {\displaystyle q} would collapse into a universal curve if we plot {\displaystyle F(q,t)t^{1/2}} vs {\displaystyle q/t^{1/2}}.
== Limiting cases ==
=== Model A ===
Model A retains growth but does not include preferential attachment. The probability of a new node connecting to any pre-existing node is equal. The resulting degree distribution in this limit is geometric, indicating that growth alone is not sufficient to produce a scale-free structure.
=== Model B ===
Model B retains preferential attachment but eliminates growth. The model begins with a fixed number of disconnected nodes and adds links, preferentially choosing high degree nodes as link destinations. Though the degree distribution early in the simulation looks scale-free, the distribution is not stable, and it eventually becomes nearly Gaussian as the network nears saturation. So preferential attachment alone is not sufficient to produce a scale-free structure.
The failure of models A and B to lead to a scale-free distribution indicates that growth and preferential attachment are needed simultaneously to reproduce the stationary power-law distribution observed in real networks.
== Non-linear preferential attachment ==
The BA model can be thought of as a specific case of the more general non-linear preferential attachment (NLPA) model. The NLPA algorithm is identical to the BA model with the attachment probability replaced by the more general form
{\displaystyle p_{i}={\frac {k_{i}^{\alpha }}{\sum _{j}k_{j}^{\alpha }}},}
where {\displaystyle \alpha } is a constant positive exponent. If {\displaystyle \alpha =1}, NLPA reduces to the BA model and is referred to as "linear". If {\displaystyle 0<\alpha <1}, NLPA is referred to as "sub-linear" and the degree distribution of the network tends to a stretched exponential distribution. If {\displaystyle \alpha >1}, NLPA is referred to as "super-linear" and a small number of nodes connect to almost all other nodes in the network. For both {\displaystyle \alpha <1} and {\displaystyle \alpha >1}, the scale-free property of the network is broken in the limit of infinite system size. However, if {\displaystyle \alpha } is only slightly larger than {\displaystyle 1}, NLPA may result in degree distributions which appear to be transiently scale free.
== History ==
Preferential attachment made its first appearance in 1923 in the celebrated urn model of the Hungarian mathematician György Pólya. The master equation method, which yields a more transparent derivation, was applied to the problem by Herbert A. Simon in 1955 in the course of studies of the sizes of cities and other phenomena. It was first applied to explain citation frequencies by Derek de Solla Price in 1976. Price was interested in the accumulation of citations of scientific papers, and the Price model used "cumulative advantage" (his name for preferential attachment) to generate a fat-tailed distribution. In the language of modern citation networks, Price's model produces a directed network, i.e. the directed version of the Barabási–Albert model. The name "preferential attachment" and the present popularity of scale-free network models is due to the work of Albert-László Barabási and Réka Albert, who discovered that a similar process is present in real networks, and in 1999 applied preferential attachment to explain the numerically observed degree distributions on the web.
== See also ==
Bianconi–Barabási model
Chinese restaurant process
Complex networks
Erdős–Rényi (ER) model
Price's model
Percolation theory
Scale-free network
Small-world network
Watts and Strogatz model
== References ==
== External links ==
"This Man Could Rule the World"
"A Java Implementation for Barabási–Albert"
"Generating Barabási–Albert Model Graphs in Code"
The Sethi model was developed by Suresh P. Sethi and describes the process of how sales evolve over time in response to advertising. The model assumes that the rate of change in sales depends on three effects: response to advertising that acts positively on the unsold portion of the market, the loss due to forgetting or possibly due to competitive factors that act negatively on the sold portion of the market, and a random effect that can go either way.
Suresh Sethi published his paper "Deterministic and Stochastic Optimization of a Dynamic Advertising Model" in 1983. The Sethi model is a modification as well as a stochastic extension of the Vidale-Wolfe advertising model. The model and its competitive and multi-echelon channel extensions have been used extensively in the literature.
Moreover, some of these extensions have been also tested empirically.
== Model ==
The Sethi advertising model or simply the Sethi model provides a sales-advertising dynamics in the form of the following stochastic differential equation:
{\displaystyle dX_{t}=\left(rU_{t}{\sqrt {1-X_{t}}}-\delta X_{t}\right)\,dt+\sigma (X_{t})\,dz_{t},\qquad X_{0}=x.}
where:
X_t is the market share at time t,
U_t is the rate of advertising at time t,
r is the coefficient of the effectiveness of advertising,
δ is the decay constant,
σ(X_t) is the diffusion coefficient,
z_t is the Wiener process (standard Brownian motion); dz_t is known as white noise.
=== Explanation ===
The rate of change in sales depends on three effects: response to advertising that acts positively, via r, on the unsold portion of the market; the loss due to forgetting or possibly due to competitive factors that acts negatively, via δ, on the sold portion of the market; and a random effect, a diffusion or white-noise term, that can go either way.
The coefficient r is the coefficient of the effectiveness of advertising, and δ is the decay constant. The square-root term brings in the so-called word-of-mouth effect, at least at low sales levels. The diffusion term σ(X_t) dz_t brings in the random effect.
=== Example of an optimal advertising problem ===
Subject to the Sethi model above with the initial market share x, consider the following objective function:
{\displaystyle V(x)=\max _{U_{t}\geq 0}\;E\left[\int _{0}^{\infty }e^{-\rho t}(\pi X_{t}-U_{t}^{2})\,dt\right],}
where π denotes the sales revenue corresponding to the total market, i.e., when x = 1, and ρ > 0 denotes the discount rate. The function V(x) is known as the value function for this problem, and it is shown to be
{\displaystyle V(x)={\bar {\lambda }}x+{\frac {{\bar {\lambda }}^{2}r^{2}}{4\rho }},}
where
{\displaystyle {\bar {\lambda }}={\frac {{\sqrt {(\rho +\delta )^{2}+r^{2}\pi }}-(\rho +\delta )}{r^{2}/2}}.}
The optimal control for this problem is
{\displaystyle U_{t}^{*}=u^{*}(X_{t})={\frac {r{\bar {\lambda }}{\sqrt {1-X_{t}}}}{2}}={\begin{cases}{}>{\bar {u}}&{\text{if }}X_{t}<{\bar {x}},\\{}={\bar {u}}&{\text{if }}X_{t}={\bar {x}},\\{}<{\bar {u}}&{\text{if }}X_{t}>{\bar {x}},\end{cases}}}
where
{\displaystyle {\bar {x}}={\frac {r^{2}{\bar {\lambda }}/2}{r^{2}{\bar {\lambda }}/2+\delta }}}
and
{\displaystyle {\bar {u}}={\frac {r{\bar {\lambda }}{\sqrt {1-{\bar {x}}}}}{2}}.}
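The dynamics under the optimal feedback control can be simulated directly. The sketch below (an illustrative simulation, not from Sethi's paper; the parameter values and the choice σ(x) = σ₀√(x(1 − x)) for the diffusion coefficient are assumptions made here) integrates the SDE with the Euler–Maruyama method:

```python
import math
import random

def sethi_simulate(r=2.0, delta=0.3, pi=1.0, rho=0.1,
                   sigma0=0.1, x0=0.2, T=20.0, n=20000, seed=1):
    """Euler-Maruyama simulation of the Sethi model under the
    optimal feedback advertising rate u*(x) = r*lam*sqrt(1-x)/2."""
    rng = random.Random(seed)
    # lam-bar = (sqrt((rho+delta)^2 + r^2*pi) - (rho+delta)) / (r^2/2)
    lam = (math.sqrt((rho + delta)**2 + r**2 * pi) - (rho + delta)) / (r**2 / 2)
    xbar = (r**2 * lam / 2) / (r**2 * lam / 2 + delta)  # long-run market share
    dt = T / n
    x = x0
    for _ in range(n):
        u = r * lam * math.sqrt(max(1.0 - x, 0.0)) / 2.0     # optimal control
        drift = r * u * math.sqrt(max(1.0 - x, 0.0)) - delta * x
        diff = sigma0 * math.sqrt(max(x * (1.0 - x), 0.0))   # assumed sigma(x)
        x += drift * dt + diff * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = min(max(x, 0.0), 1.0)  # keep the share in [0, 1]
    return x, xbar
```

With σ₀ = 0 the share approaches x̄ monotonically; with small noise the simulated path fluctuates around it.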
== Extensions of the Sethi model ==
Competitive model: Nash differential games
Multi-echelon Model
Empirical testing of the Sethi model and extensions
Cooperative advertising: Stackelberg differential games
The Sethi durable goods model
== See also ==
Bass diffusion model
differential games
stochastic differential equation
diffusion of innovations
Stackelberg competition
Nash equilibrium
== References == | Wikipedia/Sethi_model |
The Kirkwood superposition approximation was introduced in 1935 by John G. Kirkwood as a means of representing a discrete probability distribution. The Kirkwood approximation for a discrete probability density function
P(x_1, x_2, …, x_n) is given by
{\displaystyle P^{\prime }(x_{1},x_{2},\ldots ,x_{n})=\prod _{i=1}^{n-1}\left[\prod _{{\mathcal {T}}_{i}\subseteq {\mathcal {V}}}p({\mathcal {T}}_{i})\right]^{(-1)^{n-1-i}}}
where ∏_{T_i ⊆ V} p(T_i) is the product of probabilities over all subsets of variables of size i in the variable set V. This kind of formula has been considered by Watanabe (1960) and, according to Watanabe, also by Robert Fano. For the three-variable case, it reduces to simply
{\displaystyle P^{\prime }(x_{1},x_{2},x_{3})={\frac {p(x_{1},x_{2})p(x_{2},x_{3})p(x_{1},x_{3})}{p(x_{1})p(x_{2})p(x_{3})}}}
The Kirkwood approximation does not generally produce a valid probability distribution (the normalization condition is violated). Watanabe claims that for this reason informational expressions of this type are not meaningful, and indeed there has been very little written about the properties of this measure. The Kirkwood approximation is the probabilistic counterpart of the interaction information.
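The failure of normalization is easy to exhibit. The sketch below (an illustrative example constructed here, not taken from the cited literature) computes the three-variable Kirkwood approximation from the marginals of three perfectly correlated bits; the approximation assigns probability 1 to each of the two support points, so it sums to 2:

```python
import itertools

def kirkwood3(joint):
    """Three-variable Kirkwood approximation
    P'(x1,x2,x3) = p12*p23*p13 / (p1*p2*p3), built from the marginals of
    `joint`, a dict mapping (x1, x2, x3) tuples to probabilities."""
    def marg(keep):
        m = {}
        for xs, p in joint.items():
            key = tuple(xs[i] for i in keep)
            m[key] = m.get(key, 0.0) + p
        return m
    p1, p2, p3 = marg([0]), marg([1]), marg([2])
    p12, p23, p13 = marg([0, 1]), marg([1, 2]), marg([0, 2])
    approx = {}
    for x1, x2, x3 in itertools.product([0, 1], repeat=3):
        num = (p12.get((x1, x2), 0.0) * p23.get((x2, x3), 0.0)
               * p13.get((x1, x3), 0.0))
        den = p1[(x1,)] * p2[(x2,)] * p3[(x3,)]
        approx[(x1, x2, x3)] = num / den
    return approx

# Three perfectly correlated fair bits: x1 = x2 = x3
joint = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
approx = kirkwood3(joint)
total = sum(approx.values())  # equals 2, not 1
```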
Judea Pearl (1988 §3.2.4) indicates that an expression of this type can be exact in the case of a decomposable model, that is, a probability distribution that admits a graph structure whose cliques form a tree. In such cases, the numerator contains the product of the intra-clique joint distributions and the denominator contains the product of the clique intersection distributions.
== References ==
Jakulin, A. & Bratko, I. (2004), Quantifying and visualizing attribute interactions: An approach based on entropy, Journal of Machine Learning Research, (submitted) pp. 38–43.
Matsuda, Hiroyuki (2000-09-01). "Physical nature of higher-order mutual information: Intrinsic correlations and frustration". Physical Review E. 62 (3). American Physical Society (APS): 3096–3102. Bibcode:2000PhRvE..62.3096M. doi:10.1103/physreve.62.3096. ISSN 1063-651X. PMID 11088803.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann/Elsevier. doi:10.1016/c2009-0-27609-4. ISBN 978-0-08-051489-5.
Watanabe, Satosi (1960). "Information Theoretical Analysis of Multivariate Correlation". IBM Journal of Research and Development. 4 (1). IBM: 66–82. doi:10.1147/rd.41.0066. ISSN 0018-8646. | Wikipedia/Kirkwood_approximation |
In probability theory, Kolmogorov equations characterize continuous-time Markov processes. In particular, they describe how the probability of a continuous-time Markov process being in a certain state changes over time. There are four distinct equations: the Kolmogorov forward equation for continuous processes, now understood to be identical to the Fokker–Planck equation, the Kolmogorov forward equation for jump processes, and two Kolmogorov backward equations for processes with and without discontinuous jumps.
== Diffusion processes vs. jump processes ==
Writing in 1931, Andrei Kolmogorov started from the theory of discrete time Markov processes, which are described by the Chapman–Kolmogorov equation, and sought to derive a theory of continuous time Markov processes by extending this equation. He found that there are two kinds of continuous time Markov processes, depending on the assumed behavior over small intervals of time:
If you assume that "in a small time interval there is an overwhelming probability that the state will remain unchanged; however, if it changes, the change may be radical", then you are led to what are called jump processes.
The other case leads to processes such as those "represented by diffusion and by Brownian motion; there it is certain that some change will occur in any time interval, however small; only, here it is certain that the changes during small time intervals will be also small".
For each of these two kinds of processes, Kolmogorov derived a forward and a backward system of equations (four in all).
== History ==
The equations are named after Andrei Kolmogorov since they were highlighted in his 1931 foundational work.
William Feller, in 1949, used the names "forward equation" and "backward equation" for his more general version of the Kolmogorov's pair,
in both jump and diffusion processes. Much later, in 1956, he referred to the equations for the jump process as "Kolmogorov forward equations" and "Kolmogorov backward equations".
Other authors, such as Motoo Kimura, referred to the diffusion (Fokker–Planck) equation as Kolmogorov forward equation, a name that has persisted.
== The modern view ==
In the context of a continuous-time Markov process with jumps, see Kolmogorov equations (Markov jump process). In particular, in natural sciences the forward equation is also known as master equation.
In the context of a diffusion process, for the backward Kolmogorov equations see Kolmogorov backward equations (diffusion). The forward Kolmogorov equation is also known as Fokker–Planck equation.
== Continuous-time Markov chains ==
The original derivation of the equations by Kolmogorov starts with the Chapman–Kolmogorov equation (Kolmogorov called it the fundamental equation) for time-continuous and differentiable Markov processes on a finite, discrete state space. In this formulation, it is assumed that the probabilities P(x, s; y, t) are continuous and differentiable functions of t > s, where x, y ∈ Ω (the state space) and t > s (with t, s ≥ 0) are the final and initial times, respectively. Also, adequate limit properties for the derivatives are assumed. Feller derives the equations under slightly different conditions, starting with the concept of a purely discontinuous Markov process and then formulating them for more general state spaces. Feller proves the existence of solutions of probabilistic character to the Kolmogorov forward equations and Kolmogorov backward equations under natural conditions.
For the case of a countable state space we put i, j in place of x, y.
The Kolmogorov forward equations read
{\displaystyle {\frac {\partial P_{ij}}{\partial t}}(s;t)=\sum _{k}P_{ik}(s;t)A_{kj}(t)}
,
where A(t) is the transition rate matrix (also known as the generator matrix),
while the Kolmogorov backward equations are
{\displaystyle {\frac {\partial P_{ij}}{\partial s}}(s;t)=-\sum _{k}P_{kj}(s;t)A_{ik}(s)}
The functions P_ij(s; t) are continuous and differentiable in both time arguments. They represent the probability that the system that was in state i at time s jumps to state j at some later time t > s. The continuous quantities A_ij(t) satisfy
{\displaystyle A_{ij}(t)=\left[{\frac {\partial P_{ij}}{\partial u}}(t;u)\right]_{u=t},\quad A_{jk}(t)\geq 0,\ j\neq k,\quad \sum _{k}A_{jk}(t)=0.}
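For a time-homogeneous chain on two states, the forward equations can be integrated numerically and compared with the known closed-form transition matrix. The sketch below (an illustrative check written for this article, with arbitrary rates a and b) uses an explicit Euler step; since the generator rows sum to zero, each row of the computed matrix remains a probability distribution:

```python
import math

def forward_solve(a, b, t, steps=30000):
    """Integrate the Kolmogorov forward equation dP/dt = P A for the
    two-state chain with generator A = [[-a, a], [b, -b]], P(0) = I."""
    P = [[1.0, 0.0], [0.0, 1.0]]
    A = [[-a, a], [b, -b]]
    dt = t / steps
    for _ in range(steps):
        # (P A)_{ij} = sum_k P_{ik} A_{kj}, explicit Euler step
        dP = [[sum(P[i][k] * A[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)]
        P = [[P[i][j] + dt * dP[i][j] for j in range(2)] for i in range(2)]
    return P

def forward_exact(a, b, t):
    """Closed-form transition matrix for the same two-state chain."""
    e = math.exp(-(a + b) * t)
    s = a + b
    return [[(b + a * e) / s, (a - a * e) / s],
            [(b - b * e) / s, (a + b * e) / s]]
```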
=== Relation with the generating function ===
Still in the discrete state case, letting s = 0 and assuming that the system initially is found in state i, the Kolmogorov forward equations describe an initial-value problem for finding the probabilities of the process, given the quantities A_jk(t). We write p_k(t) = P_ik(0; t), where Σ_k p_k(t) = 1; then
{\displaystyle {\frac {dp_{k}}{dt}}(t)=\sum _{j}A_{jk}(t)p_{j}(t);\quad p_{k}(0)=\delta _{ik},\qquad k=0,1,\dots .}
For the case of a pure death process with constant rates the only nonzero coefficients are A_{j, j−1} = μ_j, j ≥ 1. Letting
{\displaystyle \Psi (x,t)=\sum _{k}x^{k}p_{k}(t),}
the system of equations can in this case be recast as a partial differential equation for Ψ(x, t) with initial condition Ψ(x, 0) = x^i. After some manipulations, the system of equations reads
{\displaystyle {\frac {\partial \Psi }{\partial t}}(x,t)=\mu (1-x){\frac {\partial \Psi }{\partial x}}(x,t);\qquad \Psi (x,0)=x^{i},\quad \Psi (1,t)=1.}
== An example from biology ==
One example from biology is given below:
{\displaystyle p_{n}'(t)=(n-1)\beta p_{n-1}(t)-n\beta p_{n}(t)}
This equation is applied to model population growth with birth, where n is the population index (with reference to the initial population), β is the birth rate, and p_n(t) = Pr(N(t) = n) is the probability of achieving a certain population size.
The analytical solution is:
{\displaystyle p_{n}(t)=(n-1)\beta e^{-n\beta t}\int _{0}^{t}\!p_{n-1}(s)\,e^{n\beta s}\,\mathrm {d} s}
This is a formula for the probability p_n(t) in terms of the preceding one, p_{n−1}(t).
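For this pure birth (Yule) process started from a single individual, the equations can be checked against the known closed-form solution p_n(t) = e^{−βt}(1 − e^{−βt})^{n−1}. The sketch below (an illustrative numerical check; β and the truncation level n ≤ 30 are arbitrary choices made here) integrates the forward equations with an Euler step:

```python
import math

def birth_process(beta, t, nmax=30, steps=20000):
    """Integrate p_n'(t) = (n-1)*beta*p_{n-1}(t) - n*beta*p_n(t)
    for a pure birth (Yule) process started from one individual,
    truncating the state space at n = nmax."""
    p = [0.0] * (nmax + 1)
    p[1] = 1.0  # initial population size 1
    dt = t / steps
    for _ in range(steps):
        dp = [0.0] * (nmax + 1)
        for n in range(1, nmax + 1):
            inflow = (n - 1) * beta * p[n - 1] if n > 1 else 0.0
            dp[n] = inflow - n * beta * p[n]
        p = [p[n] + dt * dp[n] for n in range(nmax + 1)]
    return p

beta, t = 0.5, 2.0
p = birth_process(beta, t)
# Known closed form for initial size 1: p_n(t) = e^{-bt} (1 - e^{-bt})^{n-1}
exact = [0.0] + [math.exp(-beta * t) * (1 - math.exp(-beta * t)) ** (n - 1)
                 for n in range(1, 31)]
```

The total probability stays close to 1; the small deficit comes from the Euler step and from mass escaping past the truncation level.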
== See also ==
Feynman–Kac formula
Fokker–Planck equation
Kolmogorov backward equation
== References == | Wikipedia/Kolmogorov_backward_equation |
In mathematics, the base flow of a random dynamical system is the dynamical system defined on the "noise" probability space that describes how to "fast forward" or "rewind" the noise when one wishes to change the time at which one "starts" the random dynamical system.
== Definition ==
In the definition of a random dynamical system, one is given a family of maps ϑ_s : Ω → Ω on a probability space (Ω, F, P). The measure-preserving dynamical system (Ω, F, P, ϑ) is known as the base flow of the random dynamical system. The maps ϑ_s are often known as shift maps since they "shift" time. The base flow is often ergodic.
The parameter s may be chosen to run over
R (a two-sided continuous-time dynamical system);
[0, +∞) (a one-sided continuous-time dynamical system);
Z (a two-sided discrete-time dynamical system);
N ∪ {0} (a one-sided discrete-time dynamical system).
Each map ϑ_s is required
to be (F, F)-measurable: for all E ∈ F, ϑ_s^{−1}(E) ∈ F;
to preserve the measure P: for all E ∈ F, P(ϑ_s^{−1}(E)) = P(E).
Furthermore, as a family, the maps ϑ_s satisfy the relations
ϑ_0 = id_Ω, the identity function on Ω;
ϑ_s ∘ ϑ_t = ϑ_{s+t} for all s and t for which the three maps in this expression are defined. In particular, ϑ_s^{−1} = ϑ_{−s} if −s exists.
In other words, the maps ϑ_s form a commutative monoid (in the cases s ∈ N ∪ {0} and s ∈ [0, +∞)) or a commutative group (in the cases s ∈ Z and s ∈ R).
== Example ==
In the case of a random dynamical system driven by a Wiener process W : R × Ω → X, where (Ω, F, P) is the two-sided classical Wiener space, the base flow ϑ_s : Ω → Ω would be given by
{\displaystyle W(t,\vartheta _{s}(\omega ))=W(t+s,\omega )-W(s,\omega )}
.
This can be read as saying that ϑ_s "starts the noise at time s instead of time 0".
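On a discretized path the shift map is easy to implement: ϑ_s discards the path before time s and re-bases it at zero. The sketch below (an illustrative discrete-time analogue written for this article, not a construction on the full Wiener space) represents ω by samples of W on a grid and checks the defining identity W(t, ϑ_s ω) = W(t + s, ω) − W(s, ω):

```python
import random

def brownian_path(n, dt, seed=0):
    """Samples W(0), W(dt), ..., W(n*dt) of a Brownian path."""
    rng = random.Random(seed)
    w = [0.0]
    for _ in range(n):
        w.append(w[-1] + rng.gauss(0.0, dt ** 0.5))
    return w

def shift(w, k):
    """Discrete base-flow map: theta_{k*dt} re-bases the path at step k."""
    return [w[j] - w[k] for j in range(k, len(w))]

n, dt, k = 1000, 0.01, 250
w = brownian_path(n, dt)
ws = shift(w, k)  # the path with the noise "started" at time s = k*dt
# Defining identity: W(t, theta_s omega) = W(t+s, omega) - W(s, omega)
ok = all(abs(ws[j] - (w[j + k] - w[k])) < 1e-12 for j in range(len(ws)))
```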
== References == | Wikipedia/Base_flow_(random_dynamical_systems) |
In mathematics — specifically, in large deviations theory — a rate function is a function used to quantify the probabilities of rare events. Such functions are used to formulate large deviation principles. A large deviation principle quantifies the asymptotic probability of rare events for a sequence of probabilities.
A rate function is also called a Cramér function, after the Swedish probabilist Harald Cramér.
== Definitions ==
Rate function: An extended real-valued function I : X → [0, +∞] defined on a Hausdorff topological space X is said to be a rate function if it is not identically +∞ and is lower semi-continuous, i.e. all the sub-level sets
{\displaystyle \{x\in X\mid I(x)\leq c\}{\mbox{ for }}c\geq 0}
are closed in X. If, furthermore, they are compact, then I is said to be a good rate function.
A family of probability measures (μ_δ)_{δ>0} on X is said to satisfy the large deviation principle with rate function I : X → [0, +∞) (and rate 1/δ) if, for every closed set F ⊆ X and every open set G ⊆ X,
{\displaystyle \limsup _{\delta \downarrow 0}\delta \log \mu _{\delta }(F)\leq -\inf _{x\in F}I(x),\quad {\mbox{(U)}}}
{\displaystyle \liminf _{\delta \downarrow 0}\delta \log \mu _{\delta }(G)\geq -\inf _{x\in G}I(x).\quad {\mbox{(L)}}}
If the upper bound (U) holds only for compact (instead of closed) sets F, then (μ_δ)_{δ>0} is said to satisfy the weak large deviations principle (with rate 1/δ and weak rate function I).
=== Remarks ===
The role of the open and closed sets in the large deviation principle is similar to their role in the weak convergence of probability measures: recall that (μ_δ)_{δ>0} is said to converge weakly to μ if, for every closed set F ⊆ X and every open set G ⊆ X,
{\displaystyle \limsup _{\delta \downarrow 0}\mu _{\delta }(F)\leq \mu (F),}
{\displaystyle \liminf _{\delta \downarrow 0}\mu _{\delta }(G)\geq \mu (G).}
There is some variation in the nomenclature used in the literature: for example, den Hollander (2000) uses simply "rate function" where this article — following Dembo & Zeitouni (1998) — uses "good rate function", and "weak rate function". Rassoul-Agha & Seppäläinen (2015) uses the term "tight rate function" instead of "good rate function" due to the connection with exponential tightness of a family of measures. Regardless of the nomenclature used for rate functions, examination of whether the upper bound inequality (U) is supposed to hold for closed or compact sets tells one whether the large deviation principle in use is strong or weak.
== Properties ==
=== Uniqueness ===
A natural question to ask, given the somewhat abstract setting of the general framework above, is whether the rate function is unique. This turns out to be the case: given a sequence of probability measures (μδ)δ>0 on X satisfying the large deviation principle for two rate functions I and J, it follows that I(x) = J(x) for all x ∈ X.
=== Exponential tightness ===
It is possible to convert a weak large deviation principle into a strong one if the measures converge sufficiently quickly. If the upper bound holds for compact sets F and the sequence of measures (μδ)δ>0 is exponentially tight, then the upper bound also holds for closed sets F. In other words, exponential tightness enables one to convert a weak large deviation principle into a strong one.
=== Continuity ===
Naïvely, one might try to replace the two inequalities (U) and (L) by the single requirement that, for all Borel sets S ⊆ X,
{\displaystyle \lim _{\delta \downarrow 0}\delta \log \mu _{\delta }(S)=-\inf _{x\in S}I(x).\quad {\mbox{(E)}}}
The equality (E) is far too restrictive, since many interesting examples satisfy (U) and (L) but not (E). For example, the measure μδ might be non-atomic for all δ, so the equality (E) could hold for S = {x} only if I were identically +∞, which is not permitted in the definition. However, the inequalities (U) and (L) do imply the equality (E) for so-called I-continuous sets S ⊆ X, those for which
{\displaystyle I{\big (}{\stackrel {\circ }{S}}{\big )}=I{\big (}{\bar {S}}{\big )},}
where S̊ and S̄ denote the interior and closure of S in X respectively. In many examples, many sets/events of interest are I-continuous. For example, if I is a continuous function, then all sets S such that
{\displaystyle S\subseteq {\bar {\stackrel {\circ }{S}}}}
are I-continuous; all open sets, for example, satisfy this containment.
=== Transformation of large deviation principles ===
Given a large deviation principle on one space, it is often of interest to be able to construct a large deviation principle on another space. There are several results in this area:
the contraction principle tells one how a large deviation principle on one space "pushes forward" (via the pushforward of a probability measure) to a large deviation principle on another space via a continuous function;
the Dawson-Gärtner theorem tells one how a sequence of large deviation principles on a sequence of spaces passes to the projective limit.
the tilted large deviation principle gives a large deviation principle for integrals of exponential functionals.
exponentially equivalent measures have the same large deviation principles.
== History and basic development ==
The notion of a rate function emerged in the 1930s with the Swedish mathematician Harald Cramér's study of a sequence of i.i.d. random variables (Z_i)_{i∈N}. Namely, among some considerations of scaling, Cramér studied the behavior of the distribution of the average X_n = (1/n) Σ_{i=1}^{n} Z_i as n → ∞. He found that the tails of the distribution of X_n decay exponentially as e^{−nλ(x)}, where the factor λ(x) in the exponent is the Legendre–Fenchel transform (a.k.a. the convex conjugate) of the cumulant-generating function
{\displaystyle \Psi _{Z}(t)=\log \operatorname {E} e^{tZ}.}
For this reason this particular function λ(x) is sometimes called the Cramér function. The rate function defined above in this article is a broad generalization of this notion of Cramér's, defined more abstractly on a probability space, rather than the state space of a random variable.
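For a standard normal Z, the cumulant-generating function is Ψ_Z(t) = t²/2 and its Legendre–Fenchel transform is λ(x) = x²/2. The sketch below (an illustrative numerical check; the grid-search maximization is a crude substitute for exact convex conjugation) approximates sup_t (tx − Ψ(t)) and compares it with the closed form:

```python
def cramer_rate(x, psi, tmin=-10.0, tmax=10.0, steps=20001):
    """Legendre-Fenchel transform lambda(x) = sup_t (t*x - psi(t)),
    approximated by maximizing over a uniform grid of t values."""
    best = float("-inf")
    for i in range(steps):
        t = tmin + (tmax - tmin) * i / (steps - 1)
        best = max(best, t * x - psi(t))
    return best

def psi_gauss(t):
    # cumulant-generating function of the standard normal distribution
    return t * t / 2.0

# For N(0,1) the transform is x^2/2, attained at t = x
rates = [cramer_rate(x, psi_gauss) for x in (0.0, 0.5, 1.0, 2.0)]
```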
== See also ==
Extreme value theory
Moderate deviation principle
== References ==
Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. xvi+396. ISBN 0-387-98406-2. MR1619036
den Hollander, Frank (2000). Large deviations. Fields Institute Monographs 14. Providence, RI: American Mathematical Society. p. x+143. ISBN 0-8218-1989-5. MR1739680
Rassoul-Agha, Firas; Seppäläinen, Timo (2015). A course on large deviations with an introduction to Gibbs measures. Graduate Studies in Mathematics 162. Providence, RI: American Mathematical Society. xiv+318. ISBN 978-0-8218-7578-0. MR3309619 | Wikipedia/Rate_function |
A random function – of either one variable (a random process), or two or more variables (a random field) – is called Gaussian if every finite-dimensional distribution is a multivariate normal distribution. Gaussian random fields on the sphere are useful (for example) when analysing
the anomalies in the cosmic microwave background radiation (see, pp. 8–9);
brain images obtained by positron emission tomography (see, pp. 9–10).
Sometimes, a value of a Gaussian random function deviates from its expected value by several standard deviations. This is a large deviation. Though rare in a small domain (of space or/and time), large deviations may be quite usual in a large domain.
== Basic statement ==
Let M be the maximal value of a Gaussian random function X on the (two-dimensional) sphere. Assume that the expected value of X is 0 (at every point of the sphere), and the standard deviation of X is 1 (at every point of the sphere). Then, for large a > 0, P(M > a) is close to
{\displaystyle Ca\exp(-a^{2}/2)+2P(\xi >a)}
,
where ξ is distributed N(0, 1) (the standard normal distribution), and C is a constant; it does not depend on a, but depends on the correlation function of X (see below). The relative error of the approximation decays exponentially for large a.
The constant C is easy to determine in the important special case described in terms of the directional derivative of X at a given point (of the sphere) in a given direction (tangential to the sphere). The derivative is random, with zero expectation and some standard deviation. The latter may depend on the point and the direction. However, if it does not depend on them, then it is equal to (π/2)^{1/4} C^{1/2} (for the sphere of radius 1).
The coefficient 2 before P(ξ > a) is in fact the Euler characteristic of the sphere (for the torus it vanishes).
It is assumed that X is twice continuously differentiable (almost surely), and reaches its maximum at a single point (almost surely).
== The clue: mean Euler characteristic ==
The clue to the theory sketched above is the Euler characteristic χ_a of the set {X > a} of all points t (of the sphere) such that X(t) > a. Its expected value (in other words, mean value) E(χ_a) can be calculated explicitly:
{\displaystyle E(\chi _{a})=Ca\exp(-a^{2}/2)+2P(\xi >a)}
(which is far from being trivial, and involves the Poincaré–Hopf theorem, the Gauss–Bonnet theorem, Rice's formula, etc.).
The set {X > a} is the empty set whenever M < a; in this case χ_a = 0. In the other case, when M > a, the set {X > a} is non-empty; its Euler characteristic may take various values, depending on the topology of the set (the number of connected components, and possible holes in these components). However, if a is large and M > a, then the set {X > a} is usually a small, slightly deformed disk or ellipse (which is easy to guess, but quite difficult to prove). Thus, its Euler characteristic χ_a is usually equal to 1 (given that M > a). This is why E(χ_a) is close to P(M > a).
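A one-dimensional analogue of this heuristic can be tested numerically: for a smooth stationary Gaussian process on the circle, the Euler characteristic of {X > a} is just the number of its component intervals, i.e. the number of upcrossings of the level a, whose mean is given by Rice's formula. The sketch below (an illustrative Monte Carlo experiment constructed here, not taken from the cited lectures) compares the simulated mean count with the Rice prediction:

```python
import math
import random

def mean_upcrossings(K=5, u=1.5, paths=1000, m=400, seed=7):
    """Monte Carlo estimate of the mean number of upcrossings of level u
    by a smooth stationary Gaussian process on the circle, built from the
    first K random Fourier modes (normalized to unit variance)."""
    rng = random.Random(seed)
    grid = [2 * math.pi * j / m for j in range(m)]
    c = [[math.cos((k + 1) * t) for t in grid] for k in range(K)]
    s = [[math.sin((k + 1) * t) for t in grid] for k in range(K)]
    norm = 1.0 / math.sqrt(K)
    total = 0
    for _ in range(paths):
        xi = [rng.gauss(0, 1) for _ in range(K)]
        eta = [rng.gauss(0, 1) for _ in range(K)]
        x = [norm * sum(xi[k] * c[k][j] + eta[k] * s[k][j] for k in range(K))
             for j in range(m)]
        # an upcrossing: X <= u just before, X > u just after (wrapping)
        total += sum(1 for j in range(m) if x[j] <= u < x[(j + 1) % m])
    return total / paths

K, u = 5, 1.5
lam2 = (K + 1) * (2 * K + 1) / 6               # variance of X'(t)
rice = math.sqrt(lam2) * math.exp(-u * u / 2)  # Rice-formula mean count
mc = mean_upcrossings(K, u)
```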
== See also ==
Gaussian process
Gaussian random field
Large deviations theory
== Further reading ==
The basic statement given above is a simple special case of a much more general (and difficult) theory stated by Adler. For a detailed presentation of this special case see Tsirelson's lectures. | Wikipedia/Large_deviations_of_Gaussian_random_functions |
In mathematics — specifically, in measure theory — Malliavin's absolute continuity lemma is a result due to the French mathematician Paul Malliavin that plays a foundational rôle in the regularity (smoothness) theorems of the Malliavin calculus. Malliavin's lemma gives a sufficient condition for a finite Borel measure to be absolutely continuous with respect to Lebesgue measure.
== Statement of the lemma ==
Let μ be a finite Borel measure on n-dimensional Euclidean space Rn. Suppose that, for every x ∈ Rn, there exists a constant C = C(x) such that
{\displaystyle \left|\int _{\mathbf {R} ^{n}}\mathrm {D} \varphi (y)(x)\,\mathrm {d} \mu (y)\right|\leq C(x)\|\varphi \|_{\infty }}
for every C∞ function φ : Rn → R with compact support. Then μ is absolutely continuous with respect to n-dimensional Lebesgue measure λn on Rn. In the above, Dφ(y) denotes the Fréchet derivative of φ at y and ||φ||∞ denotes the supremum norm of φ.
== References ==
Bell, Denis R. (2006). The Malliavin calculus. Mineola, NY: Dover Publications Inc. pp. x+113. ISBN 0-486-44994-7. MR2250060 (See section 1.3)
Malliavin, Paul (1978). "Stochastic calculus of variations and hypoelliptic operators". Proceedings of the International Symposium on Stochastic Differential Equations (Res. Inst. Math. Sci., Kyoto Univ., Kyoto, 1976). New York: Wiley. pp. 195–263. MR536013 | Wikipedia/Malliavin's_absolute_continuity_lemma |
The Watts–Strogatz model is a random graph generation model that produces graphs with small-world properties, including short average path lengths and high clustering. It was proposed by Duncan J. Watts and Steven Strogatz in their article published in 1998 in the scientific journal Nature. The model also became known as the (Watts) beta model after Watts used β to formulate it in his popular science book Six Degrees.
== Rationale for the model ==
The formal study of random graphs dates back to the work of Paul Erdős and Alfréd Rényi. The graphs they considered, now known as the classical or Erdős–Rényi (ER) graphs, offer a simple and powerful model with many applications.
However the ER graphs do not have two important properties observed in many real-world networks:
They do not generate local clustering and triadic closures. Instead, because they have a constant, random, and independent probability of two nodes being connected, ER graphs have a low clustering coefficient.
They do not account for the formation of hubs. Formally, the degree distribution of ER graphs converges to a Poisson distribution, rather than a power law observed in many real-world, scale-free networks.
The Watts and Strogatz model was designed as the simplest possible model that addresses the first of the two limitations. It accounts for clustering while retaining the short average path lengths of the ER model. It does so by interpolating between a randomized structure close to ER graphs and a regular ring lattice. Consequently, the model is able to at least partially explain the "small-world" phenomena in a variety of networks, such as the power grid, neural network of C. elegans, networks of movie actors, or fat-metabolism communication in budding yeast.
== Algorithm ==
Given the desired number of nodes N, the mean degree K (assumed to be an even integer), and a parameter β, all satisfying 0 ≤ β ≤ 1 and N ≫ K ≫ ln N ≫ 1, the model constructs an undirected graph with N nodes and NK/2 edges in the following way:
Construct a regular ring lattice, a graph with N nodes each connected to K neighbors, K/2 on each side. That is, if the nodes are labeled 0 … N − 1, there is an edge (i, j) if and only if
{\displaystyle 0<|i-j|\ \mathrm {mod} \ \left(N-1-{\frac {K}{2}}\right)\leq {\frac {K}{2}}.}
For every node i = 0, …, N − 1, take every edge connecting i to its K/2 rightmost neighbors, that is every edge (i, j) such that 0 < (j − i) mod N ≤ K/2, and rewire it with probability β. Rewiring is done by replacing (i, j) with (i, k), where k is chosen uniformly at random from all possible nodes while avoiding self-loops (k ≠ i) and link duplication (there is no edge (i, k′) with k′ = k at this point in the algorithm).
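The two steps above can be sketched in a short Python implementation (an illustrative sketch, not reference code; the function name `watts_strogatz` and the adjacency-set representation are choices made here):

```python
import random

def watts_strogatz(N, K, beta, seed=0):
    """Sketch of the Watts-Strogatz construction; returns adjacency sets."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(N)}
    # Step 1: ring lattice, each node joined to K/2 neighbors on each side.
    for i in range(N):
        for off in range(1, K // 2 + 1):
            j = (i + off) % N
            adj[i].add(j); adj[j].add(i)
    # Step 2: rewire each rightward lattice edge (i, j) with probability beta,
    # avoiding self-loops and duplicate links.
    for i in range(N):
        for off in range(1, K // 2 + 1):
            j = (i + off) % N
            if rng.random() < beta and j in adj[i]:
                candidates = [k for k in range(N) if k != i and k not in adj[i]]
                if candidates:
                    k = rng.choice(candidates)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(k); adj[k].add(i)
    return adj

# The edge count NK/2 is preserved: every rewire removes one edge and adds one.
G = watts_strogatz(100, 4, 0.1)
```

Note that each rewire keeps the edge anchored at node i, so every node retains at least its K/2 rightward attachments, consistent with the minimum-degree remark in the Properties section below.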
== Properties ==
The underlying lattice structure of the model produces a locally clustered network, while the randomly rewired links dramatically reduce the average path lengths. The algorithm introduces about βNK/2 such non-lattice edges. Varying β makes it possible to interpolate between a regular lattice (β = 0) and a structure close to an Erdős–Rényi random graph G(N, p) with p = K/(N − 1) at β = 1. It does not approach the actual ER model, since every node will be connected to at least K/2 other nodes.
The three properties of interest are the average path length, the clustering coefficient, and the degree distribution.
=== Average path length ===
For a ring lattice, the average path length is ℓ(0) ≈ N/2K ≫ 1 and scales linearly with the system size. In the limiting case of β → 1, the graph approaches a random graph with ℓ(1) ≈ ln N / ln K, while not actually converging to it. In the intermediate region 0 < β < 1, the average path length falls very rapidly with increasing β, quickly approaching its limiting value.
=== Clustering coefficient ===
For the ring lattice the clustering coefficient is C(0) = 3(K − 2)/(4(K − 1)), and so tends to 3/4 as K grows, independently of the system size. In the limiting case of β → 1 the clustering coefficient is of the same order as the clustering coefficient for classical random graphs, C = K/(N − 1), and is thus inversely proportional to the system size. In the intermediate region the clustering coefficient remains quite close to its value for the regular lattice, and only falls at relatively high β. This results in a region where the average path length falls rapidly, but the clustering coefficient does not, explaining the "small-world" phenomenon.
If we use the Barrat and Weigt measure for clustering C′(β), defined as the fraction between the average number of edges between the neighbors of a node and the average number of possible edges between these neighbors, or, alternatively,
{\displaystyle C'(\beta )\equiv {\frac {3\times {\text{number of triangles}}}{\text{number of connected triples}}}}
then we get
{\displaystyle C'(\beta )\sim C(0)(1-\beta )^{3}.}
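The closed form C(0) = 3(K − 2)/(4(K − 1)) can be checked by brute-force triangle counting on a small ring lattice (a quick sketch; the helper names `ring_lattice` and `clustering` are introduced here):

```python
from itertools import combinations

def ring_lattice(N, K):
    """Adjacency sets of a ring lattice: each node linked to K/2 neighbors per side."""
    adj = {i: set() for i in range(N)}
    for i in range(N):
        for off in range(1, K // 2 + 1):
            adj[i].add((i + off) % N)
            adj[(i + off) % N].add(i)
    return adj

def clustering(adj):
    """Average local clustering coefficient."""
    total = 0.0
    for i, nbrs in adj.items():
        k = len(nbrs)
        # count edges among the neighbors of i
        links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

N, K = 200, 6
measured = clustering(ring_lattice(N, K))
predicted = 3 * (K - 2) / (4 * (K - 1))  # C(0) = 3(K-2)/(4(K-1)) = 0.6 for K = 6
```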
=== Degree distribution ===
The degree distribution in the case of the ring lattice is just a Dirac delta function centered at K. The degree distribution for a large number of nodes and 0 < β < 1 can be written as
{\displaystyle P(k)\approx \sum _{n=0}^{f(k,K)}{{K/2} \choose {n}}(1-\beta )^{n}\beta ^{K/2-n}{\frac {(\beta K/2)^{k-K/2-n}}{(k-K/2-n)!}}e^{-\beta K/2},}
where k_i is the number of edges that the i-th node has, or its degree. Here k ≥ K/2, and f(k, K) = min(k − K/2, K/2). The shape of the degree distribution is similar to that of a random graph: it has a pronounced peak at k = K and decays exponentially for large |k − K|. The topology of the network is relatively homogeneous, meaning that all nodes are of similar degree.
== Limitations ==
The major limitation of the model is that it produces an unrealistic degree distribution. In contrast, real networks are often scale-free networks inhomogeneous in degree, having hubs and a scale-free degree distribution. Such networks are better described in that respect by the preferential attachment family of models, such as the Barabási–Albert (BA) model. (On the other hand, the Barabási–Albert model fails to produce the high levels of clustering seen in real networks, a shortcoming not shared by the Watts and Strogatz model. Thus, neither the Watts and Strogatz model nor the Barabási–Albert model should be viewed as fully realistic.)
The Watts and Strogatz model also implies a fixed number of nodes and thus cannot be used to model network growth.
== See also ==
Small-world networks
Erdős–Rényi (ER) model
Barabási–Albert model
Social networks
== References ==
In probability theory, if a large number of events are all independent of one another and each has probability less than 1, then there is a positive (possibly small) probability that none of the events will occur. The Lovász local lemma allows a slight relaxation of the independence condition: As long as the events are "mostly" independent from one another and aren't individually too likely, then there will still be a positive probability that none of them occurs. This lemma is most commonly used in the probabilistic method, in particular to give existence proofs.
There are several different versions of the lemma. The simplest and most frequently used is the symmetric version given below. A weaker version was proved in 1975 by László Lovász and Paul Erdős in the article Problems and results on 3-chromatic hypergraphs and some related questions. For other versions, see Alon & Spencer (2000). In 2020, Robin Moser and Gábor Tardos received the Gödel Prize for their algorithmic version of the Lovász Local Lemma, which uses entropy compression to provide an efficient randomized algorithm for finding an outcome in which none of the events occurs.
== Statements of the lemma (symmetric version) ==
Let A_1, A_2, …, A_k be a sequence of events such that each event occurs with probability at most p and such that each event is independent of all the other events except for at most d of them.
Lemma I (Lovász and Erdős 1973; published 1975). If 4pd ≤ 1, then there is a nonzero probability that none of the events occurs.
Lemma II (Lovász 1977; published by Joel Spencer). If ep(d + 1) ≤ 1, where e = 2.718... is the base of natural logarithms, then there is a nonzero probability that none of the events occurs.
Lemma II today is usually referred to as the "Lovász local lemma".
Lemma III (Shearer 1985). If
{\displaystyle {\begin{cases}p<{\frac {(d-1)^{d-1}}{d^{d}}}&d>1\\p<{\tfrac {1}{2}}&d=1\end{cases}}}
then there is a nonzero probability that none of the events occurs.
The threshold in Lemma III is optimal, and it implies that the bound epd ≤ 1 is also sufficient.
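The relationship between the three bounds is easy to confirm numerically: Shearer's threshold dominates the conditions of Lemmas I and II, as well as the bound epd ≤ 1 (a small sketch; the helper name `shearer` is chosen here):

```python
from math import e

def shearer(d):
    """Shearer's optimal threshold on p for maximum dependency degree d."""
    return 0.5 if d == 1 else (d - 1) ** (d - 1) / d ** d

for d in range(1, 200):
    assert 1 / (4 * d) <= shearer(d)         # Lemma I:  4pd <= 1
    assert 1 / (e * (d + 1)) < shearer(d)    # Lemma II: ep(d+1) <= 1
    assert 1 / (e * d) <= shearer(d)         # epd <= 1, implied by Lemma III
```

The last assertion holds because (d − 1)^{d−1}/d^d = (1/d)(1 − 1/d)^{d−1} and (1 − 1/d)^{d−1} ≥ 1/e for all d > 1.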
== Asymmetric Lovász local lemma ==
A statement of the asymmetric version (which allows for events with different probability bounds) is as follows:
Lemma (asymmetric version). Let 𝒜 = {A_1, …, A_n} be a finite set of events in the probability space Ω. For A ∈ 𝒜 let Γ(A) denote the neighbours of A in the dependency graph (in the dependency graph, event A is not adjacent to events which are mutually independent of it). If there exists an assignment of reals x : 𝒜 → [0, 1) to the events such that
{\displaystyle \forall A\in {\mathcal {A}}:\Pr(A)\leq x(A)\prod _{B\in \Gamma (A)}(1-x(B))}
then the probability of avoiding all events in 𝒜 is positive; in particular,
{\displaystyle \Pr \left({\overline {A_{1}}}\wedge \cdots \wedge {\overline {A_{n}}}\right)\geq \prod _{i\in \{1,\dots ,n\}}(1-x(A_{i})).}
The symmetric version follows immediately from the asymmetric version by setting x(A) = 1/(d + 1) for every A ∈ 𝒜, which yields the sufficient condition p ≤ 1/(e(d + 1)), since 1/e ≤ (1 − 1/(d + 1))^d.
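The inequality used in this last step can be confirmed numerically: (1 − 1/(d + 1))^d = 1/(1 + 1/d)^d decreases monotonically toward 1/e as d grows, so it never falls below 1/e (a minimal check):

```python
from math import e

# (1 - 1/(d+1))**d decreases toward 1/e as d grows, so the reduction from the
# asymmetric condition with x(A) = 1/(d+1) to p <= 1/(e(d+1)) is valid for all d.
worst = min((1 - 1 / (d + 1)) ** d for d in range(1, 10_000))
```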
== Constructive versus non-constructive ==
As is often the case with probabilistic arguments, this theorem is nonconstructive and gives no method of determining an explicit element of the probability space in which no event occurs. However, algorithmic versions of the local lemma with stronger preconditions are also known (Beck 1991; Czumaj and Scheideler 2000). More recently, a constructive version of the local lemma was given by Robin Moser and Gábor Tardos requiring no stronger preconditions.
== Non-constructive proof ==
We prove the asymmetric version of the lemma, from which the symmetric version can be derived. Using the principle of mathematical induction, we prove that for all A in 𝒜 and all subsets S of 𝒜 that do not include A,
{\displaystyle \Pr \left(A\mid \bigwedge _{B\in S}{\overline {B}}\right)\leq x(A)}.
The induction is applied on the size (cardinality) of the set S. For the base case S = ∅ the statement obviously holds, since Pr(A_i) ≤ x(A_i). We need to show that the inequality holds for any subset of 𝒜 of a certain cardinality, given that it holds for all subsets of lower cardinality.
Let S_1 = S ∩ Γ(A) and S_2 = S ∖ S_1. From Bayes' theorem we have
{\displaystyle \Pr \left(A\mid \bigwedge _{B\in S}{\overline {B}}\right)={\frac {\Pr \left(A\wedge \bigwedge _{B\in S_{1}}{\overline {B}}\mid \bigwedge _{B\in S_{2}}{\overline {B}}\right)}{\Pr \left(\bigwedge _{B\in S_{1}}{\overline {B}}\mid \bigwedge _{B\in S_{2}}{\overline {B}}\right)}}.}
We bound the numerator and denominator of the above expression separately. For this, let S_1 = {B_{j1}, B_{j2}, …, B_{jl}}. First, we bound the numerator, exploiting the fact that A does not depend upon any event in S_2:
{\displaystyle {\text{Numerator}}\leq \Pr \left(A\mid \bigwedge _{B\in S_{2}}{\overline {B}}\right)=\Pr(A)\leq x(A)\prod _{B\in \Gamma (A)}(1-x(B)).\qquad (1)}
Expanding the denominator by using Bayes' theorem and then using the inductive assumption, we get
{\displaystyle {\begin{aligned}&{\text{Denominator}}\\={}&\Pr \left({\overline {B}}_{j1}\mid \bigwedge _{t=2}^{l}{\overline {B}}_{jt}\wedge \bigwedge _{B\in S_{2}}{\overline {B}}\right)\cdot \Pr \left({\overline {B}}_{j2}\mid \bigwedge _{t=3}^{l}{\overline {B}}_{jt}\wedge \bigwedge _{B\in S_{2}}{\overline {B}}\right)\cdots \Pr \left({\overline {B}}_{jl}\mid \bigwedge _{B\in S_{2}}{\overline {B}}\right)\geq \prod _{B\in S_{1}}(1-x(B))\qquad (2)\end{aligned}}}
The inductive assumption can be applied here since each event is conditioned on a smaller number of other events, i.e. on a subset of cardinality less than |S|. From (1) and (2), we get
{\displaystyle \Pr \left(A\mid \bigwedge _{B\in S}{\overline {B}}\right)\leq x(A)\prod _{B\in \Gamma (A)-S_{1}}(1-x(B))\leq x(A),}
where the final inequality uses the fact that the value of x is always in [0, 1). Note that we have essentially proved
{\displaystyle \Pr \left({\overline {A}}\mid \bigwedge _{B\in S}{\overline {B}}\right)\geq 1-x(A)}.
To get the desired probability, we write it in terms of conditional probabilities, applying Bayes' theorem repeatedly. Hence,
{\displaystyle {\begin{aligned}&\Pr \left({\overline {A_{1}}}\wedge \cdots \wedge {\overline {A_{n}}}\right)\\={}&\Pr \left({\overline {A_{1}}}\mid {\overline {A_{2}}}\wedge \cdots {\overline {A_{n}}}\right)\cdot \Pr \left({\overline {A_{2}}}\mid {\overline {A_{3}}}\wedge \cdots {\overline {A_{n}}}\right)\cdots \Pr \left({\overline {A_{n}}}\right)\\\geq {}&\prod _{A\in {\mathcal {A}}}(1-x(A)),\end{aligned}}}
which is what we had intended to prove.
== Example ==
Suppose 11n points are placed around a circle and colored with n different colors in such a way that each color is applied to exactly 11 points. In any such coloring, there must be a set of n points containing one point of each color but not containing any pair of adjacent points.
To see this, imagine picking a point of each color randomly, with all points equally likely (i.e., having probability 1/11) to be chosen. The 11n different events we want to avoid correspond to the 11n pairs of adjacent points on the circle. For each pair our chance of picking both points in that pair is at most 1/121 (exactly 1/121 if the two points are of different colors, otherwise 0), so we will take p = 1/121.
Whether a given pair (a, b) of points is chosen depends only on what happens in the colors of a and b, and not at all on which points are chosen in the other n − 2 colors. This implies the event "a and b are both chosen" depends only on those pairs of adjacent points which share a color either with a or with b.
There are 11 points on the circle sharing a color with a (including a itself), each of which is involved with 2 pairs. This means there are 21 pairs other than (a, b) which include the same color as a, and the same holds true for b. The worst that can happen is that these two sets are disjoint, so we can take d = 42 in the lemma. This gives
{\displaystyle ep(d+1)\approx 0.966<1.}
By the local lemma, there is a positive probability that none of the bad events occur, meaning that our set contains no pair of adjacent points. This implies that a set satisfying our conditions must exist.
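The arithmetic of the example is easy to verify (a minimal check mirroring the counts above):

```python
from math import e

# p: both points of a fixed adjacent pair are chosen (probability 1/11 each,
# chosen independently within each color class).
p = (1 / 11) ** 2                      # = 1/121
# 11 points share a's color, each lying in 2 adjacent pairs: 22 pairs, one of
# which is (a, b) itself. Same for b's color; the two sets are disjoint in the
# worst case, giving d = 21 + 21.
d = (2 * 11 - 1) + (2 * 11 - 1)        # = 42
value = e * p * (d + 1)                # Lemma II condition: must be <= 1
```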
== See also ==
Shearer's inequality
== Notes ==
== References ==
Alon, Noga; Spencer, Joel H. (2000). The probabilistic method (2nd ed.). New York: Wiley-Interscience. ISBN 0-471-37046-0.
Beck, J. (1991). "An algorithmic approach to the Lovász local lemma, I". Random Structures and Algorithms. 2 (4): 343–365. doi:10.1002/rsa.3240020402.
Czumaj, Artur; Scheideler, Christian (2000). "Coloring nonuniform hypergraphs: A new algorithmic approach to the general Lovász local lemma". Random Structures & Algorithms. 17 (3–4): 213–237. doi:10.1002/1098-2418(200010/12)17:3/4<213::AID-RSA3>3.0.CO;2-Y.
Erdős, Paul; Lovász, László (1975). "Problems and results on 3-chromatic hypergraphs and some related questions" (PDF). In A. Hajnal; R. Rado; V. T. Sós (eds.). Infinite and Finite Sets (to Paul Erdős on his 60th birthday). Vol. II. North-Holland. pp. 609–627.
Moser, Robin A. (2008). "A constructive proof of the Lovasz Local Lemma". arXiv:0810.4812 [cs.DS].
For statistics in probability theory, the Boolean–Poisson model, or simply Boolean model, for a random subset of the plane (or higher dimensions, analogously) is one of the simplest and most tractable models in stochastic geometry. Take a Poisson point process of rate λ in the plane and make each point the center of a random set; the resulting union of overlapping sets is a realization of the Boolean model ℬ. More precisely, the parameters are λ and a probability distribution on compact sets; for each point ξ of the Poisson point process we pick a set C_ξ from the distribution, and then define ℬ as the union ∪_ξ (ξ + C_ξ) of translated sets.
To illustrate tractability with one simple formula, the mean density of ℬ equals 1 − exp(−λA), where Γ denotes the area of C_ξ and A = E(Γ). The classical theory of stochastic geometry develops many further formulae.
As related topics, the case of constant-sized discs is the basic model of continuum percolation, and low-density Boolean models serve as first-order approximations in the study of extremes in many models.
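The mean-density formula can be checked with a small Monte Carlo experiment using deterministic discs on the unit torus (an illustrative sketch; the rate, disc radius, and grid resolution are choices made here, and the grain area A = πr² is deterministic so E(Γ) = πr²):

```python
import random
from math import exp, pi

rng = random.Random(0)
lam, r, reps, grid = 50.0, 0.05, 20, 50     # rate, disc radius, realizations, grid side
A = pi * r * r                               # deterministic grain area
theory = 1 - exp(-lam * A)                   # predicted mean density, ~0.3248

def poisson(rng, lam):
    """Knuth's method: multiply uniforms until the product drops below e^-lam."""
    L, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

covered = 0.0
for _ in range(reps):
    n = poisson(rng, lam)                    # number of germs in the unit square
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    hit = 0
    for gx in range(grid):
        for gy in range(grid):
            x, y = (gx + 0.5) / grid, (gy + 0.5) / grid
            for cx, cy in pts:
                dx = abs(x - cx); dx = min(dx, 1 - dx)   # torus metric
                dy = abs(y - cy); dy = min(dy, 1 - dy)
                if dx * dx + dy * dy <= r * r:
                    hit += 1
                    break
    covered += hit / (grid * grid)
estimate = covered / reps                    # empirical mean density
```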
== References ==
In statistics and statistical physics, the Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult. New samples are added to the sequence in two steps: first a new sample is proposed based on the previous sample, then the proposed sample is either added to the sequence or rejected depending on the value of the probability distribution at that point. The resulting sequence can be used to approximate the distribution (e.g. to generate a histogram) or to compute an integral (e.g. an expected value).
Metropolis–Hastings and other MCMC algorithms are generally used for sampling from multi-dimensional distributions, especially when the number of dimensions is high. For single-dimensional distributions, there are usually other methods (e.g. adaptive rejection sampling) that can directly return independent samples from the distribution, and these are free from the problem of autocorrelated samples that is inherent in MCMC methods.
== History ==
The algorithm is named in part for Nicholas Metropolis, the first coauthor of a 1953 paper, entitled Equation of State Calculations by Fast Computing Machines, with Arianna W. Rosenbluth, Marshall Rosenbluth, Augusta H. Teller and Edward Teller. For many years the algorithm was known simply as the Metropolis algorithm. The paper proposed the algorithm for the case of symmetrical proposal distributions, but in 1970, W.K. Hastings extended it to the more general case. The generalized method was eventually identified by both names, although the first use of the term "Metropolis-Hastings algorithm" is unclear.
Some controversy exists with regard to credit for development of the Metropolis algorithm. Metropolis, who was familiar with the computational aspects of the method, had coined the term "Monte Carlo" in an earlier article with Stanisław Ulam, and led the group in the Theoretical Division that designed and built the MANIAC I computer used in the experiments in 1952. However, prior to 2003 there was no detailed account of the algorithm's development. Shortly before his death, Marshall Rosenbluth attended a 2003 conference at LANL marking the 50th anniversary of the 1953 publication. At this conference, Rosenbluth described the algorithm and its development in a presentation titled "Genesis of the Monte Carlo Algorithm for Statistical Mechanics". Further historical clarification is made by Gubernatis in a 2005 journal article recounting the 50th anniversary conference. Rosenbluth makes it clear that he and his wife Arianna did the work, and that Metropolis played no role in the development other than providing computer time.
This contradicts an account by Edward Teller, who states in his memoirs that the five authors of the 1953 article worked together for "days (and nights)". In contrast, the detailed account by Rosenbluth credits Teller with a crucial but early suggestion to "take advantage of statistical mechanics and take ensemble averages instead of following detailed kinematics". This, says Rosenbluth, started him thinking about the generalized Monte Carlo approach – a topic which he says he had discussed often with John Von Neumann. Arianna Rosenbluth recounted (to Gubernatis in 2003) that Augusta Teller started the computer work, but that Arianna herself took it over and wrote the code from scratch. In an oral history recorded shortly before his death, Rosenbluth again credits Teller with posing the original problem, himself with solving it, and Arianna with programming the computer.
== Description ==
The Metropolis–Hastings algorithm can draw samples from any probability distribution with probability density P(x), provided that we know a function f(x) proportional to the density P and that the values of f(x) can be calculated. The requirement that f(x) must only be proportional to the density, rather than exactly equal to it, makes the Metropolis–Hastings algorithm particularly useful, because it removes the need to calculate the density's normalization factor, which is often extremely difficult in practice.
The Metropolis–Hastings algorithm generates a sequence of sample values in such a way that, as more and more sample values are produced, the distribution of values more closely approximates the desired distribution. These sample values are produced iteratively, with the distribution of the next sample depending only on the current sample value, which makes the sequence of samples a Markov chain. Specifically, at each iteration, the algorithm proposes a candidate for the next sample value based on the current sample value. Then, with some probability, the candidate is either accepted (in which case the candidate value is used in the next iteration) or rejected (in which case the candidate value is discarded and the current value is reused in the next iteration). The probability of acceptance is determined by comparing the values of the function f(x) of the current and candidate sample values with respect to the desired distribution.
The method used to propose new candidates is characterized by the probability distribution g(x ∣ y) (sometimes written Q(x ∣ y)) of a new proposed sample x given the previous sample y. This is called the proposal density, proposal function, or jumping distribution. A common choice for g(x ∣ y) is a Gaussian distribution centered at y, so that points closer to y are more likely to be visited next, making the sequence of samples into a Gaussian random walk. In the original paper by Metropolis et al. (1953), g(x ∣ y) was suggested to be a uniform distribution limited to some maximum distance from y. More complicated proposal functions are also possible, such as those of Hamiltonian Monte Carlo, Langevin Monte Carlo, or preconditioned Crank–Nicolson.
For the purpose of illustration, the Metropolis algorithm, a special case of the Metropolis–Hastings algorithm where the proposal function is symmetric, is described below.
Metropolis algorithm (symmetric proposal distribution)
Let f(x) be a function that is proportional to the desired probability density function P(x) (a.k.a. a target distribution).
Initialization: Choose an arbitrary point x_t to be the first observation in the sample and choose a proposal function g(x ∣ y). In this section, g is assumed to be symmetric; in other words, it must satisfy g(x ∣ y) = g(y ∣ x).
For each iteration t:
Propose a candidate x′ for the next sample by picking from the distribution g(x′ ∣ x_t).
Calculate the acceptance ratio α = f(x′)/f(x_t), which will be used to decide whether to accept or reject the candidate. Because f is proportional to the density of P, we have that α = f(x′)/f(x_t) = P(x′)/P(x_t).
Accept or reject:
Generate a uniform random number u ∈ [0, 1].
If u ≤ α, then accept the candidate by setting x_{t+1} = x′.
If u > α, then reject the candidate and set x_{t+1} = x_t instead.
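The steps above can be sketched in a few lines of Python for a one-dimensional target known only up to normalization (an illustrative sketch, not reference code; the uniform proposal width and the standard-normal target are choices made here):

```python
import random
from math import exp

def metropolis(f, x0, step, n, seed=0):
    """Metropolis algorithm with a symmetric uniform proposal on [x-step, x+step]."""
    rng = random.Random(seed)
    samples, x, fx = [], x0, f(x0)
    for _ in range(n):
        x_new = x + rng.uniform(-step, step)   # symmetric: g(x'|x) = g(x|x')
        f_new = f(x_new)
        # Accept with probability min(1, f(x')/f(x)); otherwise stay at x.
        if rng.random() * fx <= f_new:
            x, fx = x_new, f_new
        samples.append(x)
    return samples

# Target: standard normal density, known only up to its normalizing constant.
samples = metropolis(lambda x: exp(-x * x / 2), x0=0.0, step=2.5, n=100_000)
kept = samples[10_000:]                        # discard burn-in
mean = sum(kept) / len(kept)
var = sum((s - mean) ** 2 for s in kept) / len(kept)
```

Note that `u <= α` is evaluated as `u * f(x) <= f(x')`, which avoids a division and is equivalent since f is positive.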
This algorithm proceeds by randomly attempting to move about the sample space, sometimes accepting the moves and sometimes remaining in place. The number of iterations the algorithm spends at a specific point x is proportional to P(x) at that point. Note that the acceptance ratio α indicates how probable the new proposed sample is with respect to the current sample, according to the distribution whose density is P(x). If we attempt to move to a point that is more probable than the existing point (i.e. a point in a higher-density region of P(x), corresponding to α > 1 ≥ u), we will always accept the move. However, if we attempt to move to a less probable point, we will sometimes reject the move, and the larger the relative drop in probability, the more likely we are to reject the new point. Thus, we will tend to stay in (and return large numbers of samples from) high-density regions of P(x), while only occasionally visiting low-density regions. Intuitively, this is why this algorithm works and returns samples that follow the desired distribution with density P(x).
Compared with an algorithm like adaptive rejection sampling that directly generates independent samples from a distribution, Metropolis–Hastings and other MCMC algorithms have a number of disadvantages:
The samples are autocorrelated. Even though over the long term they do correctly follow P(x), a set of nearby samples will be correlated with each other and not correctly reflect the distribution. This means that effective sample sizes can be significantly lower than the number of samples actually taken, leading to large errors.
Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a burn-in period is typically necessary, where an initial number of samples are thrown away.
On the other hand, most simple rejection sampling methods suffer from the "curse of dimensionality", where the probability of rejection increases exponentially as a function of the number of dimensions. Metropolis–Hastings, along with other MCMC methods, do not have this problem to such a degree, and thus are often the only solutions available when the number of dimensions of the distribution to be sampled is high. As a result, MCMC methods are often the methods of choice for producing samples from hierarchical Bayesian models and other high-dimensional statistical models used nowadays in many disciplines.
In multivariate distributions, the classic Metropolis–Hastings algorithm as described above involves choosing a new multi-dimensional sample point. When the number of dimensions is high, finding the suitable jumping distribution to use can be difficult, as the different individual dimensions behave in very different ways, and the jumping width (see above) must be "just right" for all dimensions at once to avoid excessively slow mixing. An alternative approach that often works better in such situations, known as Gibbs sampling, involves choosing a new sample for each dimension separately from the others, rather than choosing a sample for all dimensions at once. That way, the problem of sampling from potentially high-dimensional space will be reduced to a collection of problems to sample from small dimensionality. This is especially applicable when the multivariate distribution is composed of a set of individual random variables in which each variable is conditioned on only a small number of other variables, as is the case in most typical hierarchical models. The individual variables are then sampled one at a time, with each variable conditioned on the most recent values of all the others. Various algorithms can be used to choose these individual samples, depending on the exact form of the multivariate distribution: some possibilities are the adaptive rejection sampling methods, the adaptive rejection Metropolis sampling algorithm, a simple one-dimensional Metropolis–Hastings step, or slice sampling.
== Formal derivation ==
The purpose of the Metropolis–Hastings algorithm is to generate a collection of states according to a desired distribution P(x). To accomplish this, the algorithm uses a Markov process, which asymptotically reaches a unique stationary distribution π(x) such that π(x) = P(x).
A Markov process is uniquely defined by its transition probabilities P(x′ ∣ x), the probability of transitioning from any given state x to any other given state x′. It has a unique stationary distribution π(x) when the following two conditions are met:
Existence of stationary distribution: there must exist a stationary distribution π(x). A sufficient but not necessary condition is detailed balance, which requires that each transition x → x′ is reversible: for every pair of states x, x′, the probability of being in state x and transitioning to state x′ must be equal to the probability of being in state x′ and transitioning to state x; that is, π(x)P(x′ ∣ x) = π(x′)P(x ∣ x′).
Uniqueness of stationary distribution: the stationary distribution {\displaystyle \pi (x)}
must be unique. This is guaranteed by ergodicity of the Markov process, which requires that every state must (1) be aperiodic—the system does not return to the same state at fixed intervals; and (2) be positive recurrent—the expected number of steps for returning to the same state is finite.
The Metropolis–Hastings algorithm involves designing a Markov process (by constructing transition probabilities) that fulfills the two above conditions, such that its stationary distribution {\displaystyle \pi (x)} is chosen to be {\displaystyle P(x)}. The derivation of the algorithm starts with the condition of detailed balance:
{\displaystyle P(x'\mid x)P(x)=P(x\mid x')P(x'),}
which is re-written as
{\displaystyle {\frac {P(x'\mid x)}{P(x\mid x')}}={\frac {P(x')}{P(x)}}.}
The approach is to separate the transition into two sub-steps: the proposal and the acceptance-rejection. The proposal distribution {\displaystyle g(x'\mid x)} is the conditional probability of proposing a state {\displaystyle x'} given {\displaystyle x}, and the acceptance distribution {\displaystyle A(x',x)} is the probability of accepting the proposed state {\displaystyle x'}. The transition probability can be written as their product:
{\displaystyle P(x'\mid x)=g(x'\mid x)A(x',x).}
Inserting this relation in the previous equation, we have
{\displaystyle {\frac {A(x',x)}{A(x,x')}}={\frac {P(x')}{P(x)}}{\frac {g(x\mid x')}{g(x'\mid x)}}.}
The next step in the derivation is to choose an acceptance ratio that fulfills the condition above. One common choice is the Metropolis choice:
{\displaystyle A(x',x)=\min \left(1,{\frac {P(x')}{P(x)}}{\frac {g(x\mid x')}{g(x'\mid x)}}\right).}
For this Metropolis acceptance ratio {\displaystyle A}, either {\displaystyle A(x',x)=1} or {\displaystyle A(x,x')=1}, and either way the condition is satisfied.
The Metropolis–Hastings algorithm can thus be written as follows:
Initialise: pick an initial state {\displaystyle x_{0}} and set {\displaystyle t=0}.
Iterate:
Generate a random candidate state {\displaystyle x'} according to {\displaystyle g(x'\mid x_{t})}.
Calculate the acceptance probability {\displaystyle A(x',x_{t})=\min \left(1,{\frac {P(x')}{P(x_{t})}}{\frac {g(x_{t}\mid x')}{g(x'\mid x_{t})}}\right)}.
Accept or reject: generate a uniform random number {\displaystyle u\in [0,1]}; if {\displaystyle u\leq A(x',x_{t})}, accept the new state and set {\displaystyle x_{t+1}=x'}; if {\displaystyle u>A(x',x_{t})}, reject the new state and copy the old state forward, {\displaystyle x_{t+1}=x_{t}}.
Increment: set {\displaystyle t=t+1}.
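The steps above can be sketched in Python. This is a minimal illustration, not a reference implementation; it uses a symmetric Gaussian random-walk proposal, so the Hastings ratio g(x_t | x') / g(x' | x_t) equals 1 and the acceptance probability reduces to min(1, P(x') / P(x_t)):

```python
import math
import random

def metropolis_hastings(log_p, x0, n_steps, step_size=1.0, seed=0):
    """Random-walk Metropolis sampler for an unnormalized log-density log_p."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step_size)  # propose x' ~ g(x' | x_t)
        log_a = log_p(x_new) - log_p(x)        # log of P(x') / P(x_t)
        # Accept with probability min(1, a); work in log space for stability.
        if log_a >= 0 or rng.random() < math.exp(log_a):
            x = x_new
        samples.append(x)                      # on rejection, x_{t+1} = x_t
    return samples

# Target: standard normal, known only up to a normalizing constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0,
                              n_steps=50000, step_size=2.4, seed=42)
```

Note that only the ratio P(x')/P(x_t) is needed, so the target density may be unnormalized, which is exactly why the method is useful in Bayesian settings.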
Provided that specified conditions are met, the empirical distribution of saved states {\displaystyle x_{0},\ldots ,x_{T}} will approach {\displaystyle P(x)}. The number of iterations ({\displaystyle T}) required to effectively estimate {\displaystyle P(x)} depends on a number of factors, including the relationship between {\displaystyle P(x)} and the proposal distribution and the desired accuracy of estimation. For distributions on discrete state spaces, it has to be of the order of the autocorrelation time of the Markov process.
It is important to notice that, in a general problem, it is not clear which distribution {\displaystyle g(x'\mid x)} one should use or how many iterations are necessary for proper estimation; both are free parameters of the method, which must be adjusted to the particular problem at hand.
== Use in numerical integration ==
A common use of the Metropolis–Hastings algorithm is to compute an integral. Specifically, consider a space {\displaystyle \Omega \subset \mathbb {R} } and a probability distribution {\displaystyle P(x)} over {\displaystyle \Omega }, {\displaystyle x\in \Omega }. Metropolis–Hastings can estimate an integral of the form
{\displaystyle P(E)=\int _{\Omega }A(x)P(x)\,dx,}
where {\displaystyle A(x)} is a (measurable) function of interest.
For example, consider a statistic {\displaystyle E(x)} and its probability distribution {\displaystyle P(E)}, which is a marginal distribution. Suppose that the goal is to estimate {\displaystyle P(E)} for {\displaystyle E} on the tail of {\displaystyle P(E)}. Formally, {\displaystyle P(E)} can be written as
{\displaystyle P(E)=\int _{\Omega }P(E\mid x)P(x)\,dx=\int _{\Omega }\delta {\big (}E-E(x){\big )}P(x)\,dx=E{\big (}P(E\mid X){\big )}}
and, thus, estimating {\displaystyle P(E)} can be accomplished by estimating the expected value of the indicator function {\displaystyle A_{E}(x)\equiv \mathbf {1} _{E}(x)}, which is 1 when {\displaystyle E(x)\in [E,E+\Delta E]} and zero otherwise.
Because {\displaystyle E} is on the tail of {\displaystyle P(E)}, the probability of drawing a state {\displaystyle x} with {\displaystyle E(x)} on the tail of {\displaystyle P(E)} is proportional to {\displaystyle P(E)}, which is small by definition. The Metropolis–Hastings algorithm can be used here to sample such (rare) states more often and thus increase the number of samples used to estimate {\displaystyle P(E)} on the tails. This can be done, e.g., by using a sampling distribution {\displaystyle \pi (x)} that favors those states (e.g. {\displaystyle \pi (x)\propto e^{aE}} with {\displaystyle a>0}).
== Step-by-step instructions ==
Suppose that the most recent value sampled is {\displaystyle x_{t}}. To follow the Metropolis–Hastings algorithm, we next draw a new proposal state {\displaystyle x'} with probability density {\displaystyle g(x'\mid x_{t})} and calculate a value
{\displaystyle a=a_{1}a_{2},}
where
{\displaystyle a_{1}={\frac {P(x')}{P(x_{t})}}}
is the probability (e.g., Bayesian posterior) ratio between the proposed sample {\displaystyle x'} and the previous sample {\displaystyle x_{t}}, and
{\displaystyle a_{2}={\frac {g(x_{t}\mid x')}{g(x'\mid x_{t})}}}
is the ratio of the proposal density in the two directions (from {\displaystyle x_{t}} to {\displaystyle x'} and conversely). This is equal to 1 if the proposal density is symmetric.
Then the new state {\displaystyle x_{t+1}} is chosen according to the following rules.
If {\displaystyle a\geq 1{:}} {\displaystyle x_{t+1}=x',}
else:
{\displaystyle x_{t+1}={\begin{cases}x'&{\text{with probability }}a,\\x_{t}&{\text{with probability }}1-a.\end{cases}}}
The Markov chain is started from an arbitrary initial value {\displaystyle x_{0}}, and the algorithm is run for many iterations until this initial state is "forgotten". These initial samples, which are discarded, are known as burn-in. The remaining set of accepted values of {\displaystyle x} represents a sample from the distribution {\displaystyle P(x)}.
The algorithm works best if the proposal density matches the shape of the target distribution {\displaystyle P(x)}, from which direct sampling is difficult; that is, {\displaystyle g(x'\mid x_{t})\approx P(x')}. If a Gaussian proposal density {\displaystyle g} is used, the variance parameter {\displaystyle \sigma ^{2}} has to be tuned during the burn-in period. This is usually done by calculating the acceptance rate, which is the fraction of proposed samples that is accepted in a window of the last {\displaystyle N} samples.
The desired acceptance rate depends on the target distribution; however, it has been shown theoretically that the ideal acceptance rate for a one-dimensional Gaussian distribution is about 50%, decreasing to about 23% for an {\displaystyle N}-dimensional Gaussian target distribution. These guidelines can work well when sampling from sufficiently regular Bayesian posteriors, as they often follow a multivariate normal distribution, as can be established using the Bernstein–von Mises theorem.
If {\displaystyle \sigma ^{2}} is too small, the chain will mix slowly (i.e., the acceptance rate will be high, but successive samples will move around the space slowly, and the chain will converge only slowly to {\displaystyle P(x)}). On the other hand, if {\displaystyle \sigma ^{2}} is too large, the acceptance rate will be very low because the proposals are likely to land in regions of much lower probability density, so {\displaystyle a_{1}} will be very small, and again the chain will converge very slowly. One typically tunes the proposal distribution so that the algorithm accepts on the order of 30% of all samples, in line with the theoretical estimates mentioned in the previous paragraph.
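One simple way to automate this tuning during burn-in is to adjust the proposal scale after each window of proposals according to the observed acceptance rate. The multiplicative update rule below is our own illustrative heuristic, not a method prescribed by the text:

```python
import math
import random

def tune_step_size(log_p, x0, target_rate=0.44, n_windows=50, window=100, seed=0):
    """Crude burn-in tuning of a Gaussian proposal's scale (illustrative
    heuristic): after each window of proposals, enlarge sigma when the
    empirical acceptance rate is above the target and shrink it when below.
    """
    rng = random.Random(seed)
    x, sigma = x0, 1.0
    for _ in range(n_windows):
        accepted = 0
        for _ in range(window):
            x_new = x + rng.gauss(0.0, sigma)
            log_a = log_p(x_new) - log_p(x)
            if log_a >= 0 or rng.random() < math.exp(log_a):
                x, accepted = x_new, accepted + 1
        rate = accepted / window
        sigma *= math.exp(rate - target_rate)  # multiplicative adjustment
    return sigma
```

For a one-dimensional standard normal target, this drives sigma toward the region (around 2.4) where the acceptance rate matches the target.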
== Bayesian inference ==
MCMC can be used to draw samples from the posterior distribution of a statistical model.
The acceptance probability is given by
{\displaystyle P_{acc}(\theta _{i}\to \theta ^{*})=\min \left(1,{\frac {{\mathcal {L}}(y|\theta ^{*})P(\theta ^{*})}{{\mathcal {L}}(y|\theta _{i})P(\theta _{i})}}{\frac {Q(\theta _{i}|\theta ^{*})}{Q(\theta ^{*}|\theta _{i})}}\right),}
where {\displaystyle {\mathcal {L}}} is the likelihood, {\displaystyle P(\theta )} the prior probability density, and {\displaystyle Q} the (conditional) proposal probability.
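As a hedged illustration (the model, data, and function names below are our own, not from the text), consider sampling the posterior of a normal mean theta with known unit variance and a N(0, tau^2) prior; with a symmetric random-walk proposal Q, the Q-ratio in the acceptance probability cancels:

```python
import math
import random

def log_posterior(theta, data, tau=10.0):
    """Unnormalized log posterior: N(0, tau^2) prior, unit-variance likelihood."""
    log_prior = -0.5 * (theta / tau) ** 2
    log_lik = sum(-0.5 * (y - theta) ** 2 for y in data)
    return log_prior + log_lik

def sample_posterior(data, n_steps=20000, step=0.5, seed=0):
    """Random-walk Metropolis chain over theta."""
    rng = random.Random(seed)
    theta, chain = 0.0, []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        log_a = log_posterior(prop, data) - log_posterior(theta, data)
        if log_a >= 0 or rng.random() < math.exp(log_a):
            theta = prop
        chain.append(theta)
    return chain
```

This conjugate setup has a closed-form posterior mean, sum(y) / (n + 1/tau^2), which makes it convenient for checking that the chain is sampling the right distribution.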
== See also ==
Genetic algorithms
Mean-field particle methods
Metropolis light transport
Multiple-try Metropolis
Parallel tempering
Sequential Monte Carlo
Simulated annealing
== References ==
== Notes ==
== Further reading ==
Bernd A. Berg. Markov Chain Monte Carlo Simulations and Their Statistical Analysis. Singapore, World Scientific, 2004.
Chib, Siddhartha; Greenberg, Edward (1995). "Understanding the Metropolis–Hastings Algorithm". The American Statistician, 49(4), 327–335.
David D. L. Minh and Do Le Minh. "Understanding the Hastings Algorithm." Communications in Statistics - Simulation and Computation, 44:2 332–349, 2015
Bolstad, William M. (2010) Understanding Computational Bayesian Statistics, John Wiley & Sons ISBN 0-470-04609-0
In probability theory and statistics, the cumulants κn of a probability distribution are a set of quantities that provide an alternative to the moments of the distribution. Any two probability distributions whose moments are identical will have identical cumulants as well, and vice versa.
The first cumulant is the mean, the second cumulant is the variance, and the third cumulant is the same as the third central moment. But fourth and higher-order cumulants are not equal to central moments. In some cases theoretical treatments of problems in terms of cumulants are simpler than those using moments. In particular, when two or more random variables are statistically independent, the nth-order cumulant of their sum is equal to the sum of their nth-order cumulants. Moreover, the third and higher-order cumulants of a normal distribution are zero, and it is the only distribution with this property.
Just as for moments, where joint moments are used for collections of random variables, it is possible to define joint cumulants.
== Definition ==
The cumulants of a random variable X are defined using the cumulant-generating function K(t), which is the natural logarithm of the moment-generating function:
{\displaystyle K(t)=\log \operatorname {E} \left[e^{tX}\right].}
The cumulants κn are obtained from a power series expansion of the cumulant generating function:
{\displaystyle K(t)=\sum _{n=1}^{\infty }\kappa _{n}{\frac {t^{n}}{n!}}=\kappa _{1}{\frac {t}{1!}}+\kappa _{2}{\frac {t^{2}}{2!}}+\kappa _{3}{\frac {t^{3}}{3!}}+\cdots =\mu t+\sigma ^{2}{\frac {t^{2}}{2}}+\cdots .}
This expansion is a Maclaurin series, so the nth cumulant can be obtained by differentiating the above expansion n times and evaluating the result at zero:
{\displaystyle \kappa _{n}=K^{(n)}(0).}
If the moment-generating function does not exist, the cumulants can be defined in terms of the relationship between cumulants and moments discussed later.
=== Alternative definition of the cumulant generating function ===
Some writers prefer to define the cumulant-generating function as the natural logarithm of the characteristic function, which is sometimes also called the second characteristic function,
{\displaystyle H(t)=\log \operatorname {E} \left[e^{itX}\right]=\sum _{n=1}^{\infty }\kappa _{n}{\frac {(it)^{n}}{n!}}=\mu it-\sigma ^{2}{\frac {t^{2}}{2}}+\cdots }
An advantage of H(t) — in some sense the function K(t) evaluated for purely imaginary arguments — is that E[eitX] is well defined for all real values of t even when E[etX] is not well defined for all real values of t, such as can occur when there is "too much" probability that X has a large magnitude. Although the function H(t) will be well defined, it will nonetheless mimic K(t) in terms of the length of its Maclaurin series, which may not extend beyond (or, rarely, even to) linear order in the argument t, and in particular the number of cumulants that are well defined will not change. Nevertheless, even when H(t) does not have a long Maclaurin series, it can be used directly in analyzing and, particularly, adding random variables. Both the Cauchy distribution (also called the Lorentzian) and more generally, stable distributions (related to the Lévy distribution) are examples of distributions for which the power-series expansions of the generating functions have only finitely many well-defined terms.
== Some basic properties ==
The {\textstyle n}th cumulant {\textstyle \kappa _{n}(X)} of (the distribution of) a random variable {\textstyle X} enjoys the following properties:
If {\textstyle n>1} and {\textstyle c} is constant (i.e. not random) then {\textstyle \kappa _{n}(X+c)=\kappa _{n}(X),} i.e. the cumulant is translation invariant. (If {\textstyle n=1} then we have {\textstyle \kappa _{1}(X+c)=\kappa _{1}(X)+c.)}
If {\textstyle c} is constant (i.e. not random) then {\textstyle \kappa _{n}(cX)=c^{n}\kappa _{n}(X),} i.e. the {\textstyle n}th cumulant is homogeneous of degree {\textstyle n}.
If random variables {\textstyle X_{1},\ldots ,X_{m}} are independent then
{\displaystyle \kappa _{n}(X_{1}+\cdots +X_{m})=\kappa _{n}(X_{1})+\cdots +\kappa _{n}(X_{m})\,.}
That is, the cumulant is cumulative, hence the name.
The cumulative property follows quickly by considering the cumulant-generating function:
{\displaystyle {\begin{aligned}K_{X_{1}+\cdots +X_{m}}(t)&=\log \operatorname {E} \left[e^{t(X_{1}+\cdots +X_{m})}\right]\\[5pt]&=\log \left(\operatorname {E} \left[e^{tX_{1}}\right]\cdots \operatorname {E} \left[e^{tX_{m}}\right]\right)\\[5pt]&=\log \operatorname {E} \left[e^{tX_{1}}\right]+\cdots +\log \operatorname {E} \left[e^{tX_{m}}\right]\\[5pt]&=K_{X_{1}}(t)+\cdots +K_{X_{m}}(t),\end{aligned}}}
A distribution with given cumulants κn can be approximated through an Edgeworth series.
=== First several cumulants as functions of the moments ===
All of the higher cumulants are polynomial functions of the central moments, with integer coefficients, but only in degrees 2 and 3 are the cumulants actually central moments.
{\textstyle \kappa _{1}(X)=\operatorname {E} [X]={}} the mean.
{\textstyle \kappa _{2}(X)=\operatorname {var} (X)=\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{2}\right]={}} the variance, or second central moment.
{\textstyle \kappa _{3}(X)=\operatorname {E} \left[{\left(X-\operatorname {E} (X)\right)}^{3}\right]={}} the third central moment.
{\textstyle \kappa _{4}(X)=\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{4}\right]-3\left(\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{2}\right]\right)^{2}={}} the fourth central moment minus three times the square of the second central moment. Thus this is the first case in which cumulants are not simply moments or central moments. The central moments of degree more than 3 lack the cumulative property.
{\textstyle \kappa _{5}(X)=\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{5}\right]-10\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{3}\right]\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{2}\right].}
== Cumulants of some discrete probability distributions ==
The constant random variables X = μ. The cumulant generating function is K(t) = μt. The first cumulant is κ1 = K′(0) = μ and the other cumulants are zero, κ2 = κ3 = κ4 = ⋅⋅⋅ = 0.
The Bernoulli distributions, (number of successes in one trial with probability p of success). The cumulant generating function is K(t) = log(1 − p + pet). The first cumulants are κ1 = K′(0) = p and κ2 = K′′(0) = p·(1 − p). The cumulants satisfy a recursion formula
{\displaystyle \kappa _{n+1}=p(1-p){\frac {d\kappa _{n}}{dp}}.}
The geometric distributions, (number of failures before one success with probability p of success on each trial). The cumulant generating function is K(t) = log(p / (1 + (p − 1)et)). The first cumulants are κ1 = K′(0) = p−1 − 1, and κ2 = K′′(0) = κ1p−1. Substituting p = (μ + 1)−1 gives K(t) = −log(1 + μ(1−et)) and κ1 = μ.
The Poisson distributions. The cumulant generating function is K(t) = μ(et − 1). All cumulants are equal to the parameter: κ1 = κ2 = κ3 = ... = μ.
The binomial distributions, (number of successes in n independent trials with probability p of success on each trial). The special case n = 1 is a Bernoulli distribution. Every cumulant is just n times the corresponding cumulant of the corresponding Bernoulli distribution. The cumulant generating function is K(t) = n log(1 − p + pet). The first cumulants are κ1 = K′(0) = np and κ2 = K′′(0) = κ1(1 − p). Substituting p = μ·n−1 gives K′(t) = ((μ−1 − n−1)·e−t + n−1)−1 and κ1 = μ. The limiting case n → +∞ is a Poisson distribution.
The negative binomial distributions, (number of failures before r successes with probability p of success on each trial). The special case r = 1 is a geometric distribution. Every cumulant is just r times the corresponding cumulant of the corresponding geometric distribution. The derivative of the cumulant generating function is K′(t) = r·((1 − p)−1·e−t−1)−1. The first cumulants are κ1 = K′(0) = r·(p−1−1), and κ2 = K′′(0) = κ1·p−1. Substituting p = (μ·r−1+1)−1 gives K′(t) = ((μ−1 + r−1)e−t − r−1)−1 and κ1 = μ. Comparing these formulas to those of the binomial distributions explains the name 'negative binomial distribution'. The limiting case r → +∞ is a Poisson distribution.
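The Bernoulli case can be checked directly: every raw moment of a Bernoulli(p) variable equals p, so the moment-to-cumulant polynomials given later in this article reduce to simple expressions in p that agree with the recursion above (a small sanity-check sketch, not part of the original text):

```python
# For a Bernoulli(p) variable, every raw moment E[X^n] equals p, so the
# first cumulants follow exactly from the moment formulas
#   kappa_2 = m2 - m1^2              = p(1 - p)
#   kappa_3 = m3 - 3 m2 m1 + 2 m1^3  = p(1 - p)(1 - 2p),
# matching the recursion kappa_{n+1} = p(1 - p) d(kappa_n)/dp.
def bernoulli_cumulants(p):
    m1 = m2 = m3 = p  # raw moments of Bernoulli(p)
    k1 = m1
    k2 = m2 - m1 ** 2
    k3 = m3 - 3 * m2 * m1 + 2 * m1 ** 3
    return k1, k2, k3
```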
Introducing the variance-to-mean ratio
{\displaystyle \varepsilon =\mu ^{-1}\sigma ^{2}=\kappa _{1}^{-1}\kappa _{2},}
the above probability distributions get a unified formula for the derivative of the cumulant generating function:
{\displaystyle K'(t)=(1+(e^{-t}-1)\varepsilon )^{-1}\mu }
The second derivative is
{\displaystyle K''(t)=(\varepsilon -(\varepsilon -1)e^{t})^{-2}\mu \varepsilon e^{t}}
confirming that the first cumulant is κ1 = K′(0) = μ and the second cumulant is κ2 = K′′(0) = με.
The constant random variables X = μ have ε = 0.
The binomial distributions have ε = 1 − p so that 0 < ε < 1.
The Poisson distributions have ε = 1.
The negative binomial distributions have ε = p−1 so that ε > 1.
Note the analogy to the classification of conic sections by eccentricity: circles ε = 0, ellipses 0 < ε < 1, parabolas ε = 1, hyperbolas ε > 1.
== Cumulants of some continuous probability distributions ==
For the normal distribution with expected value μ and variance σ2, the cumulant generating function is K(t) = μt + σ2t2/2. The first and second derivatives of the cumulant generating function are K′(t) = μ + σ2·t and K′′(t) = σ2. The cumulants are κ1 = μ, κ2 = σ2, and κ3 = κ4 = ⋅⋅⋅ = 0. The special case σ2 = 0 is a constant random variable X = μ.
The cumulants of the uniform distribution on the interval [−1, 0] are κn = Bn /n, where Bn is the nth Bernoulli number.
The cumulants of the exponential distribution with rate parameter λ are κn = λ−n (n − 1)!.
== Some properties of the cumulant generating function ==
The cumulant generating function K(t), if it exists, is infinitely differentiable and convex, and passes through the origin. Its first derivative ranges monotonically in the open interval from the infimum to the supremum of the support of the probability distribution, and its second derivative is strictly positive everywhere it is defined, except for the degenerate distribution of a single point mass. The cumulant-generating function exists if and only if the tails of the distribution are majorized by an exponential decay, that is, (see Big O notation)
{\displaystyle {\begin{aligned}&\exists c>0,\,\,F(x)=O(e^{cx}),x\to -\infty ;{\text{ and}}\\[4pt]&\exists d>0,\,\,1-F(x)=O(e^{-dx}),x\to +\infty ;\end{aligned}}}
where {\textstyle F} is the cumulative distribution function. The cumulant-generating function will have vertical asymptote(s) at the negative supremum of such c, if such a supremum exists, and at the supremum of such d, if such a supremum exists; otherwise it will be defined for all real numbers.
If the support of a random variable X has finite upper or lower bounds, then its cumulant-generating function y = K(t), if it exists, approaches asymptote(s) whose slope is equal to the supremum or infimum of the support,
{\displaystyle {\begin{aligned}y&=(t+1)\inf \operatorname {supp} X-\mu (X),{\text{ and}}\\[5pt]y&=(t-1)\sup \operatorname {supp} X+\mu (X),\end{aligned}}}
respectively, lying above both these lines everywhere. (The integrals
{\displaystyle \int _{-\infty }^{0}\left[t\inf \operatorname {supp} X-K'(t)\right]\,dt,\qquad \int _{+\infty }^{0}\left[t\sup \operatorname {supp} X-K'(t)\right]\,dt}
yield the y-intercepts of these asymptotes, since K(0) = 0.)
For a shift of the distribution by c, {\textstyle K_{X+c}(t)=K_{X}(t)+ct.}
For a degenerate point mass at c, the cumulant generating function is the straight line {\textstyle K_{c}(t)=ct}, and more generally, {\textstyle K_{X+Y}=K_{X}+K_{Y}} if and only if X and Y are independent and their cumulant generating functions exist; (subindependence and the existence of second moments sufficing to imply independence.)
The natural exponential family of a distribution may be realized by shifting or translating K(t), and adjusting it vertically so that it always passes through the origin: if f is the pdf with cumulant generating function {\textstyle K(t)=\log M(t),} and {\textstyle f|\theta } is its natural exponential family, then {\textstyle f(x\mid \theta )={\frac {1}{M(\theta )}}e^{\theta x}f(x),} and {\textstyle K(t\mid \theta )=K(t+\theta )-K(\theta ).}
If K(t) is finite for a range t1 < Re(t) < t2, and if t1 < 0 < t2, then K(t) is analytic and infinitely differentiable for t1 < Re(t) < t2. Moreover, for t real and t1 < t < t2, K(t) is strictly convex, and K′(t) is strictly increasing.
== Further properties of cumulants ==
=== A negative result ===
Given the results for the cumulants of the normal distribution, it might be hoped to find families of distributions for which
κm = κm+1 = ⋯ = 0 for some m > 3, with the lower-order cumulants (orders 3 to m − 1) being non-zero. There are no such distributions. The underlying result here is that the cumulant generating function cannot be a finite-order polynomial of degree greater than 2.
=== Cumulants and moments ===
The moment generating function is given by
{\displaystyle M(t)=1+\sum _{n=1}^{\infty }{\frac {\mu '_{n}t^{n}}{n!}}=\exp \left(\sum _{n=1}^{\infty }{\frac {\kappa _{n}t^{n}}{n!}}\right)=\exp(K(t)).}
So the cumulant generating function is the logarithm of the moment generating function,
{\displaystyle K(t)=\log M(t).}
The first cumulant is the expected value; the second and third cumulants are respectively the second and third central moments (the second central moment is the variance); but the higher cumulants are neither moments nor central moments, but rather more complicated polynomial functions of the moments.
The moments can be recovered in terms of cumulants by evaluating the nth derivative of {\textstyle \exp(K(t))} at {\displaystyle t=0},
{\displaystyle \mu '_{n}=M^{(n)}(0)=\left.{\frac {\mathrm {d} ^{n}\exp(K(t))}{\mathrm {d} t^{n}}}\right|_{t=0}.}
Likewise, the cumulants can be recovered in terms of moments by evaluating the nth derivative of {\textstyle \log M(t)} at {\displaystyle t=0},
{\displaystyle \kappa _{n}=K^{(n)}(0)=\left.{\frac {\mathrm {d} ^{n}\log M(t)}{\mathrm {d} t^{n}}}\right|_{t=0}.}
The explicit expression for the nth moment in terms of the first n cumulants, and vice versa, can be obtained by using Faà di Bruno's formula for higher derivatives of composite functions. In general, we have
{\displaystyle \mu '_{n}=\sum _{k=1}^{n}B_{n,k}(\kappa _{1},\ldots ,\kappa _{n-k+1})}
{\displaystyle \kappa _{n}=\sum _{k=1}^{n}(-1)^{k-1}(k-1)!B_{n,k}(\mu '_{1},\ldots ,\mu '_{n-k+1}),}
where {\textstyle B_{n,k}} are incomplete (or partial) Bell polynomials.
In like manner, if the mean is given by {\textstyle \mu }, the central moment generating function is given by
{\displaystyle C(t)=\operatorname {E} [e^{t(x-\mu )}]=e^{-\mu t}M(t)=\exp(K(t)-\mu t),}
and the nth central moment is obtained in terms of cumulants as
{\displaystyle \mu _{n}=C^{(n)}(0)=\left.{\frac {\mathrm {d} ^{n}}{\mathrm {d} t^{n}}}\exp(K(t)-\mu t)\right|_{t=0}=\sum _{k=1}^{n}B_{n,k}(0,\kappa _{2},\ldots ,\kappa _{n-k+1}).}
Also, for n > 1, the nth cumulant in terms of the central moments is
{\displaystyle {\begin{aligned}\kappa _{n}&=K^{(n)}(0)=\left.{\frac {\mathrm {d} ^{n}}{\mathrm {d} t^{n}}}(\log C(t)+\mu t)\right|_{t=0}\\[4pt]&=\sum _{k=1}^{n}(-1)^{k-1}(k-1)!B_{n,k}(0,\mu _{2},\ldots ,\mu _{n-k+1}).\end{aligned}}}
The nth moment μ′n is an nth-degree polynomial in the first n cumulants. The first few expressions are:
{\displaystyle {\begin{aligned}\mu '_{1}={}&\kappa _{1}\\[5pt]\mu '_{2}={}&\kappa _{2}+\kappa _{1}^{2}\\[5pt]\mu '_{3}={}&\kappa _{3}+3\kappa _{2}\kappa _{1}+\kappa _{1}^{3}\\[5pt]\mu '_{4}={}&\kappa _{4}+4\kappa _{3}\kappa _{1}+3\kappa _{2}^{2}+6\kappa _{2}\kappa _{1}^{2}+\kappa _{1}^{4}\\[5pt]\mu '_{5}={}&\kappa _{5}+5\kappa _{4}\kappa _{1}+10\kappa _{3}\kappa _{2}+10\kappa _{3}\kappa _{1}^{2}+15\kappa _{2}^{2}\kappa _{1}+10\kappa _{2}\kappa _{1}^{3}+\kappa _{1}^{5}\\[5pt]\mu '_{6}={}&\kappa _{6}+6\kappa _{5}\kappa _{1}+15\kappa _{4}\kappa _{2}+15\kappa _{4}\kappa _{1}^{2}+10\kappa _{3}^{2}+60\kappa _{3}\kappa _{2}\kappa _{1}+20\kappa _{3}\kappa _{1}^{3}\\&{}+15\kappa _{2}^{3}+45\kappa _{2}^{2}\kappa _{1}^{2}+15\kappa _{2}\kappa _{1}^{4}+\kappa _{1}^{6}.\end{aligned}}}
The "prime" distinguishes the moments μ′n from the central moments μn. To express the central moments as functions of the cumulants, just drop from these polynomials all terms in which κ1 appears as a factor:
{\displaystyle {\begin{aligned}\mu _{1}&=0\\[4pt]\mu _{2}&=\kappa _{2}\\[4pt]\mu _{3}&=\kappa _{3}\\[4pt]\mu _{4}&=\kappa _{4}+3\kappa _{2}^{2}\\[4pt]\mu _{5}&=\kappa _{5}+10\kappa _{3}\kappa _{2}\\[4pt]\mu _{6}&=\kappa _{6}+15\kappa _{4}\kappa _{2}+10\kappa _{3}^{2}+15\kappa _{2}^{3}.\end{aligned}}}
Similarly, the nth cumulant κn is an nth-degree polynomial in the first n non-central moments. The first few expressions are:
$$\begin{aligned}
\kappa _{1}={}&\mu '_{1}\\
\kappa _{2}={}&\mu '_{2}-{\mu '_{1}}^{2}\\
\kappa _{3}={}&\mu '_{3}-3\mu '_{2}\mu '_{1}+2{\mu '_{1}}^{3}\\
\kappa _{4}={}&\mu '_{4}-4\mu '_{3}\mu '_{1}-3{\mu '_{2}}^{2}+12\mu '_{2}{\mu '_{1}}^{2}-6{\mu '_{1}}^{4}\\
\kappa _{5}={}&\mu '_{5}-5\mu '_{4}\mu '_{1}-10\mu '_{3}\mu '_{2}+20\mu '_{3}{\mu '_{1}}^{2}+30{\mu '_{2}}^{2}\mu '_{1}-60\mu '_{2}{\mu '_{1}}^{3}+24{\mu '_{1}}^{5}\\
\kappa _{6}={}&\mu '_{6}-6\mu '_{5}\mu '_{1}-15\mu '_{4}\mu '_{2}+30\mu '_{4}{\mu '_{1}}^{2}-10{\mu '_{3}}^{2}+120\mu '_{3}\mu '_{2}\mu '_{1}\\&{}-120\mu '_{3}{\mu '_{1}}^{3}+30{\mu '_{2}}^{3}-270{\mu '_{2}}^{2}{\mu '_{1}}^{2}+360\mu '_{2}{\mu '_{1}}^{4}-120{\mu '_{1}}^{6}.
\end{aligned}$$
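As a quick numerical check, the first four of these polynomials can be evaluated directly. The sketch below (plain Python; a fair coin serves as a hypothetical test distribution, and the function name is ours) recovers the known Bernoulli(1/2) cumulants κ1 = 1/2, κ2 = 1/4, κ3 = 0, κ4 = −1/8:

```python
def cumulants_from_raw_moments(m1, m2, m3, m4):
    """Evaluate the first four cumulant polynomials in the raw moments,
    exactly as displayed above."""
    k1 = m1
    k2 = m2 - m1**2
    k3 = m3 - 3*m2*m1 + 2*m1**3
    k4 = m4 - 4*m3*m1 - 3*m2**2 + 12*m2*m1**2 - 6*m1**4
    return k1, k2, k3, k4

# Raw moments of a fair coin X in {0, 1}: E[X^n] = 1/2 for every n >= 1.
k1, k2, k3, k4 = cumulants_from_raw_moments(0.5, 0.5, 0.5, 0.5)
```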
In general, the cumulant κl is given by the determinant of a matrix:
$$\kappa _{l}=(-1)^{l+1}\begin{vmatrix}\mu '_{1}&1&0&0&0&0&\ldots &0\\\mu '_{2}&\mu '_{1}&1&0&0&0&\ldots &0\\\mu '_{3}&\mu '_{2}&\binom{2}{1}\mu '_{1}&1&0&0&\ldots &0\\\mu '_{4}&\mu '_{3}&\binom{3}{1}\mu '_{2}&\binom{3}{2}\mu '_{1}&1&0&\ldots &0\\\mu '_{5}&\mu '_{4}&\binom{4}{1}\mu '_{3}&\binom{4}{2}\mu '_{2}&\binom{4}{3}\mu '_{1}&1&\ldots &0\\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots &\ddots &\vdots \\\mu '_{l-1}&\mu '_{l-2}&\ldots &\ldots &\ldots &\ldots &\ddots &1\\\mu '_{l}&\mu '_{l-1}&\ldots &\ldots &\ldots &\ldots &\ldots &\binom{l-1}{l-2}\mu '_{1}\end{vmatrix}$$
To express the cumulants κn for n > 1 as functions of the central moments, drop from these polynomials all terms in which μ'1 appears as a factor:
$$\begin{aligned}
\kappa _{2}&=\mu _{2}\\
\kappa _{3}&=\mu _{3}\\
\kappa _{4}&=\mu _{4}-3\mu _{2}^{2}\\
\kappa _{5}&=\mu _{5}-10\mu _{3}\mu _{2}\\
\kappa _{6}&=\mu _{6}-15\mu _{4}\mu _{2}-10\mu _{3}^{2}+30\mu _{2}^{3}.
\end{aligned}$$
The cumulants can be related to the moments by differentiating the relationship log M(t) = K(t) with respect to t, giving M′(t) = K′(t) M(t), which conveniently contains no exponentials or logarithms. Equating the coefficient of t n−1 / (n−1)! on the left and right sides and using μ′0 = 1 gives the following formulas for n ≥ 1:
$$\begin{aligned}
\mu '_{1}={}&\kappa _{1}\\
\mu '_{2}={}&\kappa _{1}\mu '_{1}+\kappa _{2}\\
\mu '_{3}={}&\kappa _{1}\mu '_{2}+2\kappa _{2}\mu '_{1}+\kappa _{3}\\
\mu '_{4}={}&\kappa _{1}\mu '_{3}+3\kappa _{2}\mu '_{2}+3\kappa _{3}\mu '_{1}+\kappa _{4}\\
\mu '_{5}={}&\kappa _{1}\mu '_{4}+4\kappa _{2}\mu '_{3}+6\kappa _{3}\mu '_{2}+4\kappa _{4}\mu '_{1}+\kappa _{5}\\
\mu '_{6}={}&\kappa _{1}\mu '_{5}+5\kappa _{2}\mu '_{4}+10\kappa _{3}\mu '_{3}+10\kappa _{4}\mu '_{2}+5\kappa _{5}\mu '_{1}+\kappa _{6}\\
\mu '_{n}={}&\sum _{m=1}^{n-1}{n-1 \choose m-1}\kappa _{m}\mu '_{n-m}+\kappa _{n}.
\end{aligned}$$
These allow either κn or μ′n to be computed from the other, using knowledge of the lower-order cumulants and moments. The corresponding formulas for the central moments μn for n ≥ 2 are formed from these formulas by setting μ′1 = κ1 = 0 and replacing each μ′n with μn for n ≥ 2:
$$\begin{aligned}
\mu _{2}={}&\kappa _{2}\\
\mu _{3}={}&\kappa _{3}\\
\mu _{n}={}&\sum _{m=2}^{n-2}{n-1 \choose m-1}\kappa _{m}\mu _{n-m}+\kappa _{n}.
\end{aligned}$$
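The recursion for the raw moments can be turned into a few lines of code. The sketch below (an illustrative Python helper of our own) builds μ′1..μ′N from given cumulants; taking every cumulant equal to 1 (the Poisson distribution with mean 1) reproduces the Bell numbers 1, 2, 5, 15, 52 as moments:

```python
from math import comb

def raw_moments_from_cumulants(kappa, N):
    """Raw moments mu'_1..mu'_N from cumulants kappa[1..N] via the recursion
    mu'_n = sum_{m=1}^{n-1} C(n-1, m-1) * kappa_m * mu'_{n-m} + kappa_n."""
    mu = [1.0]  # mu'_0 = 1
    for n in range(1, N + 1):
        mu.append(sum(comb(n - 1, m - 1) * kappa[m] * mu[n - m]
                      for m in range(1, n)) + kappa[n])
    return mu

# Poisson(1): every cumulant equals 1, so the raw moments are the Bell numbers.
kappa = [0.0] + [1.0] * 5
mu = raw_moments_from_cumulants(kappa, 5)
```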
=== Cumulants and set-partitions ===
These polynomials have a remarkable combinatorial interpretation: the coefficients count certain partitions of sets. A general form of these polynomials is
$$\mu '_{n}=\sum _{\pi \,\in \,\Pi }\prod _{B\,\in \,\pi }\kappa _{|B|}$$
where
π runs through the list of all partitions of a set of size n;
"B ∈ π" means B is one of the "blocks" into which the set is partitioned; and
|B| is the size of the set B.
Thus each monomial is a constant times a product of cumulants in which the sum of the indices is n (e.g., in the term κ3 κ22 κ1, the sum of the indices is 3 + 2 + 2 + 1 = 8; this appears in the polynomial that expresses the 8th moment as a function of the first eight cumulants). Each term corresponds to a partition of the integer n. The coefficient in each term is the number of partitions of a set of n members that collapse to that partition of the integer n when the members of the set become indistinguishable.
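This set-partition formula can be verified numerically. The sketch below (illustrative Python; the partition generator and the chosen cumulant values are our own) sums κ|B| products over all 15 partitions of a 4-element set and compares the result against the explicit polynomial for μ′4:

```python
def set_partitions(s):
    """Yield all partitions of the list s as lists of blocks."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for smaller in set_partitions(rest):
        # put `first` into each existing block in turn...
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        # ...or into its own singleton block
        yield [[first]] + smaller

def moment_from_cumulants(n, kappa):
    """mu'_n = sum over partitions pi of {1..n} of prod_{B in pi} kappa_{|B|}."""
    total = 0.0
    for pi in set_partitions(list(range(n))):
        prod = 1.0
        for block in pi:
            prod *= kappa[len(block)]
        total += prod
    return total

k = [0.0, 0.3, 1.1, 0.7, 2.0]  # arbitrary test values for kappa_1..kappa_4
lhs = moment_from_cumulants(4, k)
rhs = k[4] + 4*k[3]*k[1] + 3*k[2]**2 + 6*k[2]*k[1]**2 + k[1]**4
```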
=== Cumulants and combinatorics ===
Further connection between cumulants and combinatorics can be found in the work of Gian-Carlo Rota, where links to invariant theory, symmetric functions, and binomial sequences are studied via umbral calculus.
== Joint cumulants ==
The joint cumulant κ of several random variables X1, ..., Xn is defined as the coefficient κ1,...,1(X1, ..., Xn) in the Maclaurin series of the multivariate cumulant generating function
$$G(t_{1},\dots ,t_{n})=\log \operatorname {E} \left(e^{\sum _{j=1}^{n}t_{j}X_{j}}\right)=\sum _{k_{1},\ldots ,k_{n}}\kappa _{k_{1},\ldots ,k_{n}}{\frac {t_{1}^{k_{1}}\cdots t_{n}^{k_{n}}}{k_{1}!\cdots k_{n}!}}\,.$$
Note that
$$\kappa _{k_{1},\dots ,k_{n}}=\left.\left({\frac {\mathrm {d} }{\mathrm {d} t_{1}}}\right)^{k_{1}}\cdots \left({\frac {\mathrm {d} }{\mathrm {d} t_{n}}}\right)^{k_{n}}G(t_{1},\dots ,t_{n})\right|_{t_{1}=\dots =t_{n}=0}\,,$$
and, in particular,
$$\kappa (X_{1},\ldots ,X_{n})=\left.{\frac {\mathrm {d} ^{n}}{\mathrm {d} t_{1}\cdots \mathrm {d} t_{n}}}G(t_{1},\dots ,t_{n})\right|_{t_{1}=\dots =t_{n}=0}\,.$$
As with a single variable, the generating function and cumulant can instead be defined via
$$H(t_{1},\dots ,t_{n})=\log \operatorname {E} \left(e^{\sum _{j=1}^{n}it_{j}X_{j}}\right)=\sum _{k_{1},\ldots ,k_{n}}\kappa _{k_{1},\ldots ,k_{n}}i^{k_{1}+\cdots +k_{n}}{\frac {t_{1}^{k_{1}}\cdots t_{n}^{k_{n}}}{k_{1}!\cdots k_{n}!}}\,,$$
in which case
$$\kappa _{k_{1},\dots ,k_{n}}=(-i)^{k_{1}+\cdots +k_{n}}\left.\left({\frac {\mathrm {d} }{\mathrm {d} t_{1}}}\right)^{k_{1}}\cdots \left({\frac {\mathrm {d} }{\mathrm {d} t_{n}}}\right)^{k_{n}}H(t_{1},\dots ,t_{n})\right|_{t_{1}=\dots =t_{n}=0}\,,$$
and
$$\kappa (X_{1},\ldots ,X_{n})=\left.(-i)^{n}{\frac {\mathrm {d} ^{n}}{\mathrm {d} t_{1}\cdots \mathrm {d} t_{n}}}H(t_{1},\dots ,t_{n})\right|_{t_{1}=\dots =t_{n}=0}\,.$$
=== Repeated random variables and relation between the coefficients κk1, ..., kn ===
Observe that κk1, ..., kn(X1, ..., Xn) can also be written as
$$\kappa _{k_{1},\dots ,k_{n}}=\left.{\frac {\mathrm {d} ^{k_{1}}}{\mathrm {d} t_{1,1}\cdots \mathrm {d} t_{1,k_{1}}}}\cdots {\frac {\mathrm {d} ^{k_{n}}}{\mathrm {d} t_{n,1}\cdots \mathrm {d} t_{n,k_{n}}}}G\left(\sum _{j=1}^{k_{1}}t_{1,j},\dots ,\sum _{j=1}^{k_{n}}t_{n,j}\right)\right|_{t_{i,j}=0},$$
from which we conclude that
$$\kappa _{k_{1},\dots ,k_{n}}(X_{1},\ldots ,X_{n})=\kappa _{1,\ldots ,1}(\underbrace {X_{1},\dots ,X_{1}} _{k_{1}},\ldots ,\underbrace {X_{n},\dots ,X_{n}} _{k_{n}}).$$
For example,
$$\kappa _{2,0,1}(X,Y,Z)=\kappa (X,X,Z),$$
and
$$\kappa _{0,0,n,0}(X,Y,Z,T)=\kappa _{n}(Z)=\kappa (\underbrace {Z,\dots ,Z} _{n}).$$
In particular, the last equality shows that the cumulants of a single random variable are the joint cumulants of multiple copies of that random variable.
=== Relation with mixed moments ===
The joint cumulant of random variables can be expressed as an alternating sum of products of their mixed moments:
$$\kappa (X_{1},\dots ,X_{n})=\sum _{\pi }(|\pi |-1)!(-1)^{|\pi |-1}\prod _{B\in \pi }\operatorname {E} \left(\prod _{i\in B}X_{i}\right)$$
where π runs through the list of all partitions of {1, ..., n}; where B runs through the list of all blocks of the partition π; and where |π| is the number of parts in the partition.
For example,
$$\kappa (X)=\operatorname {E} (X)$$
is the expected value of X,
$$\kappa (X,Y)=\operatorname {E} (XY)-\operatorname {E} (X)\operatorname {E} (Y)$$
is the covariance of X and Y, and
$$\kappa (X,Y,Z)=\operatorname {E} (XYZ)-\operatorname {E} (XY)\operatorname {E} (Z)-\operatorname {E} (XZ)\operatorname {E} (Y)-\operatorname {E} (YZ)\operatorname {E} (X)+2\operatorname {E} (X)\operatorname {E} (Y)\operatorname {E} (Z).$$
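The alternating-sum formula is easy to implement once set partitions can be enumerated. The sketch below (illustrative Python; function names and the sample data are our own, and expectations are taken as plain averages over equally weighted sample points) checks that for n = 2 the formula reduces to the covariance E(XY) − E(X)E(Y):

```python
import math

def set_partitions(s):
    """Yield all partitions of the list s as lists of blocks."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def joint_cumulant(*samples):
    """kappa(X_1,...,X_n) from equally weighted sample columns, using
    kappa = sum_pi (|pi|-1)! (-1)^(|pi|-1) prod_{B in pi} E(prod_{i in B} X_i)."""
    n, N = len(samples), len(samples[0])
    def E(block):  # mixed sample moment over the variables in `block`
        return sum(math.prod(samples[i][t] for i in block) for t in range(N)) / N
    total = 0.0
    for pi in set_partitions(list(range(n))):
        p = len(pi)
        total += math.factorial(p - 1) * (-1) ** (p - 1) * math.prod(E(B) for B in pi)
    return total

X = [1.0, 2.0, 4.0, 7.0]
Y = [0.0, 3.0, 1.0, 2.0]
# n = 1 gives the mean; n = 2 gives the (population) covariance 0.75 here.
mean_X = joint_cumulant(X)
cov_XY = joint_cumulant(X, Y)
```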
For zero-mean random variables X1, ..., Xn, any mixed moment of the form ∏B∈π E(∏i∈B Xi) vanishes if π is a partition of {1, ..., n} which contains a singleton B = {k}.
Hence, the expression of their joint cumulant in terms of mixed moments simplifies.
For example, if X, Y, Z, W are zero-mean random variables, we have
$$\kappa (X,Y,Z)=\operatorname {E} (XYZ),$$
$$\kappa (X,Y,Z,W)=\operatorname {E} (XYZW)-\operatorname {E} (XY)\operatorname {E} (ZW)-\operatorname {E} (XZ)\operatorname {E} (YW)-\operatorname {E} (XW)\operatorname {E} (YZ).$$
More generally, any coefficient of the Maclaurin series can also be expressed in terms of mixed moments, although there are no concise formulae.
Indeed, as noted above, one can write it as a joint cumulant by repeating random variables appropriately, and then apply the above formula to express it in terms of mixed moments. For example
$$\kappa _{2,0,1}(X,Y,Z)=\kappa (X,X,Z)=\operatorname {E} (X^{2}Z)-2\operatorname {E} (XZ)\operatorname {E} (X)-\operatorname {E} (X^{2})\operatorname {E} (Z)+2\operatorname {E} (X)^{2}\operatorname {E} (Z).$$
If some of the random variables are independent of all of the others, then any cumulant involving two (or more) independent random variables is zero.
The combinatorial meaning of the expression of mixed moments in terms of cumulants is easier to understand than that of cumulants in terms of mixed moments:
$$\operatorname {E} (X_{1}\cdots X_{n})=\sum _{\pi }\prod _{B\in \pi }\kappa (X_{i}:i\in B).$$
For example:
$$\operatorname {E} (XYZ)=\kappa (X,Y,Z)+\kappa (X,Y)\kappa (Z)+\kappa (X,Z)\kappa (Y)+\kappa (Y,Z)\kappa (X)+\kappa (X)\kappa (Y)\kappa (Z).$$
=== Further properties ===
Another important property of joint cumulants is multilinearity:
$$\kappa (X+Y,Z_{1},Z_{2},\dots )=\kappa (X,Z_{1},Z_{2},\ldots )+\kappa (Y,Z_{1},Z_{2},\ldots ).$$
Just as the second cumulant is the variance, the joint cumulant of just two random variables is the covariance. The familiar identity
$$\operatorname {var} (X+Y)=\operatorname {var} (X)+2\operatorname {cov} (X,Y)+\operatorname {var} (Y)$$
generalizes to cumulants:
$$\kappa _{n}(X+Y)=\sum _{j=0}^{n}{n \choose j}\kappa (\,\underbrace {X,\dots ,X} _{j},\underbrace {Y,\dots ,Y} _{n-j}\,).$$
=== Conditional cumulants and the law of total cumulance ===
The law of total expectation and the law of total variance generalize naturally to conditional cumulants. The case n = 3, expressed in the language of (central) moments rather than that of cumulants, says
$$\mu _{3}(X)=\operatorname {E} (\mu _{3}(X\mid Y))+\mu _{3}(\operatorname {E} (X\mid Y))+3\operatorname {cov} (\operatorname {E} (X\mid Y),\operatorname {var} (X\mid Y)).$$
In general,
$$\kappa (X_{1},\dots ,X_{n})=\sum _{\pi }\kappa (\kappa (X_{\pi _{1}}\mid Y),\dots ,\kappa (X_{\pi _{b}}\mid Y))$$
where
the sum is over all partitions π of the set {1, ..., n} of indices, and
π1, ..., πb are all of the "blocks" of the partition π; the expression κ(Xπm | Y) denotes the joint conditional cumulant of the random variables whose indices are in that block of the partition.
=== Conditional cumulants and the conditional expectation ===
For certain settings, a derivative identity can be established between the conditional cumulant and the conditional expectation. For example, suppose that Y = X + Z where Z is standard normal independent of X, then for any X it holds that
$$\kappa _{n+1}(X\mid Y=y)={\frac {\mathrm {d} ^{n}}{\mathrm {d} y^{n}}}\operatorname {E} (X\mid Y=y),\quad n\in \mathbb {N} ,\;y\in \mathbb {R} .$$
The results can also be extended to the exponential family.
== Relation to statistical physics ==
In statistical physics many extensive quantities – that is, quantities that are proportional to the volume or size of a given system – are related to cumulants of random variables. The deep connection is that in a large system an extensive quantity like the energy or number of particles can be thought of as the sum of (say) the energy associated with a number of nearly independent regions. The fact that the cumulants of these nearly independent random variables will (nearly) add makes it reasonable that extensive quantities should be expected to be related to cumulants.
A system in equilibrium with a thermal bath at temperature T has a fluctuating internal energy E, which can be considered a random variable drawn from a distribution E ∼ p(E). The partition function of the system is
$$Z(\beta )=\sum _{i}e^{-\beta E_{i}},$$
where β = 1/(kT), k is the Boltzmann constant, and the notation ⟨A⟩ has been used rather than E[A] for the expectation value, to avoid confusion with the energy E. Hence the first and second cumulants of the energy E give the average energy and the heat capacity.
$$\begin{aligned}
\langle E\rangle _{c}&={\frac {\partial \log Z}{\partial (-\beta )}}=\langle E\rangle \\
\langle E^{2}\rangle _{c}&={\frac {\partial \langle E\rangle _{c}}{\partial (-\beta )}}=kT^{2}{\frac {\partial \langle E\rangle }{\partial T}}=kT^{2}C
\end{aligned}$$
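The first of these relations can be checked numerically on a toy spectrum. The sketch below (illustrative Python; the two-level system with level spacing eps = 1 is an assumption of ours, chosen only so the answer has a simple closed form) compares a central-difference derivative of log Z with respect to −β against the exact mean energy:

```python
import math

# Hypothetical two-level system with energies 0 and eps (illustration only;
# the cumulant relations above hold for any discrete spectrum).
eps = 1.0

def logZ(beta):
    # log partition function: Z = exp(0) + exp(-beta*eps)
    return math.log(1.0 + math.exp(-beta * eps))

beta = 0.7
h = 1e-6
# First energy cumulant: <E> = d(log Z)/d(-beta), via a central difference.
E_avg = (logZ(beta - h) - logZ(beta + h)) / (2 * h)
# Exact mean energy of the two-level system.
E_exact = eps / (1.0 + math.exp(beta * eps))
```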
The Helmholtz free energy,
$$F(\beta )=-\beta ^{-1}\log Z(\beta ),$$
further connects thermodynamic quantities with the cumulant generating function for the energy. Thermodynamic properties that are derivatives of the free energy, such as the internal energy, entropy, and specific heat capacity, can all be readily expressed in terms of these cumulants. Other free energies can be functions of other variables such as the magnetic field or the chemical potential μ, e.g.
$$\Omega =-\beta ^{-1}\log(\langle \exp(-\beta E-\beta \mu N)\rangle ),$$
where N is the number of particles and Ω is the grand potential. Again the close relationship between the definition of the free energy and the cumulant generating function implies that various derivatives of this free energy can be written in terms of joint cumulants of E and N.
== History ==
The history of cumulants is discussed by Anders Hald.
Cumulants were first introduced by Thorvald N. Thiele, in 1889, who called them semi-invariants. They were first called cumulants in a 1932 paper by Ronald Fisher and John Wishart. Fisher was publicly reminded of Thiele's work by Neyman, who also noted previously published citations of Thiele brought to Fisher's attention. Stephen Stigler has said that the name cumulant was suggested to Fisher in a letter from Harold Hotelling. In a paper published in 1929, Fisher had called them cumulative moment functions.
The partition function in statistical physics was introduced by Josiah Willard Gibbs in 1901. The free energy is often called Gibbs free energy. In statistical mechanics, cumulants are also known as Ursell functions relating to a publication in 1927.
== Cumulants in generalized settings ==
=== Formal cumulants ===
More generally, the cumulants of a sequence { mn : n = 1, 2, 3, ... }, not necessarily the moments of any probability distribution, are, by definition,
$$1+\sum _{n=1}^{\infty }{\frac {m_{n}t^{n}}{n!}}=\exp \left(\sum _{n=1}^{\infty }{\frac {\kappa _{n}t^{n}}{n!}}\right),$$
where the values of κn for n = 1, 2, 3, ... are found formally, i.e., by algebra alone, in disregard of questions of whether any series converges. All of the difficulties of the "problem of cumulants" are absent when one works formally. The simplest example is that the second cumulant of a probability distribution must always be nonnegative, and is zero only if all of the higher cumulants are zero. Formal cumulants are subject to no such constraints.
=== Bell numbers ===
In combinatorics, the nth Bell number is the number of partitions of a set of size n. All of the cumulants of the sequence of Bell numbers are equal to 1. The Bell numbers are the moments of the Poisson distribution with expected value 1.
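This claim about the Bell numbers can be verified by formally inverting the moment–cumulant recursion. The sketch below (illustrative Python; the inversion is the same recursion used earlier in the article, solved for κn) recovers κn = 1 for n = 1, ..., 6 from the Bell numbers:

```python
from math import comb

def formal_cumulants(m, N):
    """Formal cumulants kappa_1..kappa_N of a sequence m[1..N] (with m[0] = 1),
    obtained by inverting mu'_n = sum_{j=1}^{n-1} C(n-1, j-1) kappa_j mu'_{n-j} + kappa_n."""
    kappa = [0] * (N + 1)
    for n in range(1, N + 1):
        kappa[n] = m[n] - sum(comb(n - 1, j - 1) * kappa[j] * m[n - j]
                              for j in range(1, n))
    return kappa[1:]

bell = [1, 1, 2, 5, 15, 52, 203]  # Bell numbers B_0..B_6
kappas = formal_cumulants(bell, 6)
# Every formal cumulant of the Bell-number sequence equals 1.
```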
=== Cumulants of a polynomial sequence of binomial type ===
For any sequence { κn : n = 1, 2, 3, ... } of scalars in a field of characteristic zero, considered as formal cumulants, there is a corresponding sequence { μ′n : n = 1, 2, 3, ... } of formal moments, given by the polynomials above. For those polynomials, construct a polynomial sequence in the following way. Out of the polynomial
$$\begin{aligned}\mu '_{6}={}&\kappa _{6}+6\kappa _{5}\kappa _{1}+15\kappa _{4}\kappa _{2}+15\kappa _{4}\kappa _{1}^{2}+10\kappa _{3}^{2}+60\kappa _{3}\kappa _{2}\kappa _{1}+20\kappa _{3}\kappa _{1}^{3}\\&{}+15\kappa _{2}^{3}+45\kappa _{2}^{2}\kappa _{1}^{2}+15\kappa _{2}\kappa _{1}^{4}+\kappa _{1}^{6}\end{aligned}$$
make a new polynomial in these plus one additional variable x:
$$\begin{aligned}p_{6}(x)={}&\kappa _{6}\,x+(6\kappa _{5}\kappa _{1}+15\kappa _{4}\kappa _{2}+10\kappa _{3}^{2})\,x^{2}+(15\kappa _{4}\kappa _{1}^{2}+60\kappa _{3}\kappa _{2}\kappa _{1}+15\kappa _{2}^{3})\,x^{3}\\&{}+(45\kappa _{2}^{2}\kappa _{1}^{2})\,x^{4}+(15\kappa _{2}\kappa _{1}^{4})\,x^{5}+(\kappa _{1}^{6})\,x^{6},\end{aligned}$$
and then generalize the pattern. The pattern is that the numbers of blocks in the aforementioned partitions are the exponents on x. Each coefficient is a polynomial in the cumulants; these are the Bell polynomials, named after Eric Temple Bell.
This sequence of polynomials is of binomial type. In fact, no other sequences of binomial type exist; every polynomial sequence of binomial type is completely determined by its sequence of formal cumulants.
=== Free cumulants ===
In the above moment-cumulant formula
$$\operatorname {E} (X_{1}\cdots X_{n})=\sum _{\pi }\prod _{B\,\in \,\pi }\kappa (X_{i}:i\in B)$$
for joint cumulants, one sums over all partitions of the set { 1, ..., n }. If instead one sums only over the noncrossing partitions, then, by solving these formulae for the κ in terms of the moments, one gets free cumulants rather than the conventional cumulants treated above. These free cumulants were introduced by Roland Speicher and play a central role in free probability theory. In that theory, rather than considering independence of random variables, defined in terms of tensor products of algebras of random variables, one considers instead free independence of random variables, defined in terms of free products of algebras.
The ordinary cumulants of degree higher than 2 of the normal distribution are zero. The free cumulants of degree higher than 2 of the Wigner semicircle distribution are zero. This is one respect in which the role of the Wigner distribution in free probability theory is analogous to that of the normal distribution in conventional probability theory.
== See also ==
Entropic value at risk
Cumulant generating function from a multiset
Cornish–Fisher expansion
Edgeworth expansion
Polykay
k-statistic, a minimum-variance unbiased estimator of a cumulant
Ursell function
Total position spread tensor as an application of cumulants to analyse the electronic wave function in quantum chemistry.
== References ==
== External links ==
Weisstein, Eric W. "Cumulant". MathWorld.
cumulant on the Earliest known uses of some of the words of mathematics
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both) are random variables.
There is a distinction between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms, for example Quicksort), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms, for example the Monte Carlo algorithm for the MFAS problem) or fail to produce a result either by signaling a failure or failing to terminate. In some cases, probabilistic algorithms are the only practical means of solving a problem.
In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior and mathematical guarantees which may depend on the existence of an ideal true random number generator.
== Motivation ==
As a motivating example, consider the problem of finding an ‘a’ in an array of n elements.
Input: An array of n≥2 elements, in which half are ‘a’s and the other half are ‘b’s.
Output: Find an ‘a’ in the array.
We give two versions of the algorithm, one Las Vegas algorithm and one Monte Carlo algorithm.
Las Vegas algorithm:
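A minimal sketch of the Las Vegas variant (our own Python rendering of the search procedure described in the text; the original pseudocode is not reproduced here) probes random indices until an ‘a’ is found, so the answer is always correct and only the running time is random:

```python
import random

def find_a_las_vegas(arr):
    """Las Vegas search: repeatedly probe a random index until an 'a' is found.
    Always correct; the number of probes is a random variable with expectation 2
    when half the entries are 'a'."""
    while True:
        i = random.randrange(len(arr))
        if arr[i] == 'a':
            return i

arr = ['a', 'b'] * 8   # half 'a's, half 'b's
idx = find_a_las_vegas(arr)
```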
This algorithm succeeds with probability 1. The number of iterations varies and can be arbitrarily large, but the expected number of iterations is
$$\lim _{n\to \infty }\sum _{i=1}^{n}{\frac {i}{2^{i}}}=2.$$
Since it is constant, the expected run time over many calls is Θ(1). (See Big Theta notation.)
Monte Carlo algorithm:
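A matching sketch of the Monte Carlo variant (again our own Python rendering, not the article's original pseudocode) caps the number of probes at k, trading a small failure probability for a guaranteed time bound:

```python
import random

def find_a_monte_carlo(arr, k):
    """Monte Carlo search: at most k random probes; may fail (return None),
    but the running time is always bounded by k probes."""
    for _ in range(k):
        i = random.randrange(len(arr))
        if arr[i] == 'a':
            return i
    return None

arr = ['a', 'b'] * 8
idx = find_a_monte_carlo(arr, k=20)
```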
If an ‘a’ is found, the algorithm succeeds, else the algorithm fails. After k iterations, the probability of finding an ‘a’ is 1 − (1/2)^k.
This algorithm does not guarantee success, but the run time is bounded. The number of iterations is always less than or equal to k. Taking k to be constant, the run time (expected and absolute) is Θ(1).
Randomized algorithms are particularly useful when faced with a malicious "adversary" or attacker who deliberately tries to feed a bad input to the algorithm (see worst-case complexity and competitive analysis (online algorithm)) such as in the Prisoner's dilemma. It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore, either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing.
In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable. The Monte Carlo algorithm (related to the Monte Carlo method for simulation) is guaranteed to complete in an amount of time that can be bounded by a function of the input size and its parameter k, but allows a small probability of error. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm (via Markov's inequality), by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, then a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly until a correct answer is obtained.
== Computational complexity ==
Computational complexity theory models randomized algorithms as probabilistic Turing machines. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic randomized complexity class is RP, which is the class of decision problems for which there is an efficient (polynomial time) randomized algorithm (or probabilistic Turing machine) which recognizes NO-instances with absolute certainty and recognizes YES-instances with a probability of at least 1/2. The complement class for RP is co-RP. Problem classes having (possibly nonterminating) algorithms with polynomial time average case running time whose output is always correct are said to be in ZPP.
The class of problems for which both YES and NO-instances are allowed to be identified with some error is called BPP. This class acts as the randomized equivalent of P, i.e. BPP represents the class of efficient randomized algorithms.
== Early history ==
=== Sorting ===
Quicksort was discovered by Tony Hoare in 1959, and subsequently published in 1961. In the same year, Hoare published the quickselect algorithm, which finds the median element of a list in linear expected time. It remained open until 1973 whether a deterministic linear-time algorithm existed.
=== Number theory ===
In 1917, Henry Cabourn Pocklington introduced a randomized algorithm known as Pocklington's algorithm for efficiently finding square roots modulo prime numbers.
In 1970, Elwyn Berlekamp introduced a randomized algorithm for efficiently computing the roots of a polynomial over a finite field. In 1977, Robert M. Solovay and Volker Strassen discovered a polynomial-time randomized primality test (i.e., determining the primality of a number). Soon afterwards Michael O. Rabin demonstrated that the 1976 Miller's primality test could also be turned into a polynomial-time randomized algorithm. At that time, no provably polynomial-time deterministic algorithms for primality testing were known.
=== Data structures ===
One of the earliest randomized data structures is the hash table, which was introduced in 1953 by Hans Peter Luhn at IBM. Luhn's hash table used chaining to resolve collisions and was also one of the first applications of linked lists. Subsequently, in 1954, Gene Amdahl, Elaine M. McGraw, Nathaniel Rochester, and Arthur Samuel of IBM Research introduced linear probing, although Andrey Ershov independently had the same idea in 1957. In 1962, Donald Knuth performed the first correct analysis of linear probing, although the memorandum containing his analysis was not published until much later. The first published analysis was due to Konheim and Weiss in 1966.
Early works on hash tables either assumed access to a fully random hash function or assumed that the keys themselves were random. In 1979, Carter and Wegman introduced universal hash functions, which they showed could be used to implement chained hash tables with constant expected time per operation.
Early work on randomized data structures also extended beyond hash tables. In 1970, Burton Howard Bloom introduced an approximate-membership data structure known as the Bloom filter. In 1989, Raimund Seidel and Cecilia R. Aragon introduced a randomized balanced search tree known as the treap. In the same year, William Pugh introduced another randomized search tree known as the skip list.
=== Implicit uses in combinatorics ===
Prior to the popularization of randomized algorithms in computer science, Paul Erdős popularized the use of randomized constructions as a mathematical technique for establishing the existence of mathematical objects. This technique has become known as the probabilistic method. Erdős gave his first application of the probabilistic method in 1947, when he used a simple randomized construction to establish the existence of Ramsey graphs. He famously used a more sophisticated randomized algorithm in 1959 to establish the existence of graphs with high girth and chromatic number.
== Examples ==
=== Quicksort ===
Quicksort is a familiar, commonly used algorithm in which randomness can be useful. Many deterministic versions of this algorithm require O(n²) time to sort n numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing in O(n log n) time regardless of the characteristics of the input.
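A sketch of the randomized variant in Python (the out-of-place formulation is chosen for clarity; practical implementations partition in place):

```python
import random

def quicksort(a):
    """Randomized quicksort: expected O(n log n) comparisons on any input."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)   # uniform random pivot defeats adversarial inputs
    less  = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    more  = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(more)

print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```

With the random pivot, no fixed input (sorted, reversed, or otherwise) can force quadratic behavior; only an unlucky sequence of coin flips can, and that happens with vanishing probability.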
=== Randomized incremental constructions in geometry ===
In computational geometry, a standard technique to build a structure like a convex hull or Delaunay triangulation is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be bounded from above. This technique is known as randomized incremental construction.
=== Min cut ===
Input: A graph G(V,E)
Output: A cut partitioning the vertices into L and R, with the minimum number of edges between L and R.
Recall that the contraction of two nodes, u and v, in a (multi-)graph yields a new node u′ with edges that are the union of the edges incident on either u or v, except for any edge(s) connecting u and v. Figure 1 gives an example of contraction of vertices A and B.
After contraction, the resulting graph may have parallel edges, but contains no self loops.
Karger's basic algorithm:
begin
    i = 1
    repeat
        repeat
            take a random edge (u,v) ∈ E in G
            replace u and v with the contraction u'
        until only 2 nodes remain
        obtain the corresponding cut result Ci
        i = i + 1
    until i = m
    output the minimum cut among C1, C2, ..., Cm.
end
In each execution of the outer loop, the algorithm repeats the inner loop until only 2 nodes remain, and the corresponding cut is obtained. The run time of one execution is {\displaystyle O(n)}, where n denotes the number of vertices.
After m executions of the outer loop, the algorithm outputs the minimum cut among all the results. Figure 2 gives an example of one execution of the algorithm; the execution shown yields a cut of size 3.
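The contraction algorithm can be sketched in Python using a union–find structure to record which original vertices have been merged (an illustrative implementation, not tuned for the per-run O(n) bound; helper names are ours):

```python
import random

def karger_cut(edges, n):
    """One run of Karger's contraction algorithm on a connected graph.
    edges: list of (u, v) pairs over vertices 0..n-1.
    Returns the size of the (not necessarily minimum) cut found."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    components = n
    while components > 2:
        # Picking a uniformly random original edge and rejecting self-loops
        # is equivalent to picking uniformly among surviving multigraph edges.
        u, v = random.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:                        # contract u and v
            parent[ru] = rv
            components -= 1
    # Edges whose endpoints ended in different super-nodes cross the cut.
    return sum(1 for u, v in edges if find(u) != find(v))

def min_cut(edges, n, runs=100):
    """Repeat the contraction m times and keep the best cut found."""
    return min(karger_cut(edges, n) for _ in range(runs))

# Two triangles joined by a single bridge edge; the minimum cut is 1.
g = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
```

On the example graph, a modest number of repetitions finds the bridge cut with overwhelming probability, matching the analysis below.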
==== Analysis of algorithm ====
The probability that the algorithm succeeds is 1 − the probability that all attempts fail. By independence, the probability that all attempts fail is
{\displaystyle \prod _{i=1}^{m}\Pr(C_{i}\neq C)=\prod _{i=1}^{m}(1-\Pr(C_{i}=C)).}
By lemma 1, the probability that Ci = C is the probability that no edge of C is selected during iteration i. Consider the inner loop and let Gj denote the graph after j edge contractions, where j ∈ {0, 1, …, n − 3}. Gj has n − j vertices. We use the chain rule of conditional probabilities.
The probability that the edge chosen at iteration j is not in C, given that no edge of C has been chosen before, is {\displaystyle 1-{\frac {k}{|E(G_{j})|}}}. Note that Gj still has min cut of size k, so by Lemma 2, it still has at least {\displaystyle {\frac {(n-j)k}{2}}} edges.
Thus, {\displaystyle 1-{\frac {k}{|E(G_{j})|}}\geq 1-{\frac {2}{n-j}}={\frac {n-j-2}{n-j}}}.
So by the chain rule, the probability of finding the min cut C is
{\displaystyle \Pr[C_{i}=C]\geq \left({\frac {n-2}{n}}\right)\left({\frac {n-3}{n-1}}\right)\left({\frac {n-4}{n-2}}\right)\ldots \left({\frac {3}{5}}\right)\left({\frac {2}{4}}\right)\left({\frac {1}{3}}\right).}
Cancellation gives {\displaystyle \Pr[C_{i}=C]\geq {\frac {2}{n(n-1)}}}. Thus the probability that the algorithm succeeds is at least
{\displaystyle 1-\left(1-{\frac {2}{n(n-1)}}\right)^{m}.}
For {\displaystyle m={\frac {n(n-1)}{2}}\ln n}, this is at least {\displaystyle 1-{\frac {1}{n}}}. The algorithm therefore finds the min cut with probability {\displaystyle 1-{\frac {1}{n}}} in time {\displaystyle O(mn)=O(n^{3}\log n)}.
== Derandomization ==
Randomness can be viewed as a resource, like space and time. Derandomization is then the process of removing randomness (or using as little of it as possible). It is not currently known if all algorithms can be derandomized without significantly increasing their running time. For instance, in computational complexity, it is unknown whether P = BPP, i.e., we do not know whether we can take an arbitrary randomized algorithm that runs in polynomial time with a small error probability and derandomize it to run in polynomial time without using randomness.
There are specific methods that can be employed to derandomize particular randomized algorithms:
the method of conditional probabilities, and its generalization, pessimistic estimators
discrepancy theory (which is used to derandomize geometric algorithms)
the exploitation of limited independence in the random variables used by the algorithm, such as the pairwise independence used in universal hashing
the use of expander graphs (or dispersers in general) to amplify a limited amount of initial randomness (this last approach is also referred to as generating pseudorandom bits from a random source, and leads to the related topic of pseudorandomness)
changing the randomized algorithm to use a hash function as a source of randomness for the algorithm's tasks, and then derandomizing the algorithm by brute-forcing all possible parameters (seeds) of the hash function. This technique is usually used to exhaustively search a sample space and make the algorithm deterministic (e.g. randomized graph algorithms)
== Where randomness helps ==
When the model of computation is restricted to Turing machines, it is currently an open question whether the ability to make random choices allows some problems to be solved in polynomial time that cannot be solved in polynomial time without this ability; this is the question of whether P = BPP. However, in other contexts, there are specific examples of problems where randomization yields strict improvements.
Based on the initial motivating example: given an exponentially long string of 2^k characters, half a's and half b's, a random-access machine requires 2^(k−1) lookups in the worst case to find the index of an a; if it is permitted to make random choices, it can solve this problem in an expected polynomial number of lookups.
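The example can be made concrete with a short sketch (the helper name is ours; the string is an ordinary Python str):

```python
import random

def find_a(s):
    """Probe uniformly random positions until an 'a' is found.
    When half of s consists of a's, the expected number of probes is 2,
    no matter how the a's are arranged."""
    probes = 0
    while True:
        probes += 1
        i = random.randrange(len(s))
        if s[i] == "a":
            return i, probes

s = "b" * 8 + "a" * 8   # adversarial for a left-to-right deterministic scan
idx, probes = find_a(s)
```

A deterministic probe order can always be defeated by placing the a's in the positions it examines last, while the random prober's expected cost is a small constant.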
The natural way of carrying out a numerical computation in embedded systems or cyber-physical systems is to provide a result that approximates the correct one with high probability (Probably Approximately Correct Computation, PACC). The hard problem associated with the evaluation of the discrepancy loss between the approximated and the correct computation can be effectively addressed by resorting to randomization.
In communication complexity, the equality of two strings can be verified to some reliability using {\displaystyle \log n} bits of communication with a randomized protocol. Any deterministic protocol requires {\displaystyle \Theta (n)} bits if defending against a strong opponent.
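A common concrete protocol is polynomial fingerprinting: each party evaluates its string, read as the coefficients of a polynomial, at a shared random point modulo a large prime, and only the small residues are exchanged. A sketch (the prime and trial count are illustrative choices of ours):

```python
import random

P = 2**61 - 1   # a large Mersenne prime (illustrative choice)

def fingerprint(data, r):
    """Evaluate the byte string, read as polynomial coefficients, at r mod P."""
    f = 0
    for byte in data:
        f = (f * r + byte) % P
    return f

def probably_equal(x, y, trials=20):
    """Randomized equality test exchanging only O(log n)-bit fingerprints.
    'False' answers are always correct; a 'True' answer errs with
    probability at most (len(x)/P) per trial."""
    if len(x) != len(y):
        return False
    for _ in range(trials):
        r = random.randrange(P)
        if fingerprint(x, r) != fingerprint(y, r):
            return False
    return True
```

Unequal strings differ in a nonzero polynomial of degree below len(x), which has few roots modulo P, so a random evaluation point exposes the difference with high probability.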
The volume of a convex body can be estimated by a randomized algorithm to arbitrary precision in polynomial time. Bárány and Füredi showed that no deterministic algorithm can do the same. This is true unconditionally, i.e. without relying on any complexity-theoretic assumptions, assuming the convex body can be queried only as a black box.
A more complexity-theoretic example of a place where randomness appears to help is the class IP. IP consists of all languages that can be accepted (with high probability) by a polynomially long interaction between an all-powerful prover and a verifier that implements a BPP algorithm. IP = PSPACE. However, if it is required that the verifier be deterministic, then IP = NP.
In a chemical reaction network (a finite set of reactions like A+B → 2C + D operating on a finite number of molecules), the ability to ever reach a given target state from an initial state is decidable, while even approximating the probability of ever reaching a given target state (using the standard concentration-based probability for which reaction will occur next) is undecidable. More specifically, a limited Turing machine can be simulated with arbitrarily high probability of running correctly for all time, only if a random chemical reaction network is used. With a simple nondeterministic chemical reaction network (any possible reaction can happen next), the computational power is limited to primitive recursive functions.
== See also ==
Approximate counting algorithm
Atlantic City algorithm
Bogosort
Count–min sketch
HyperLogLog
Karger's algorithm
Las Vegas algorithm
Monte Carlo algorithm
Principle of deferred decision
Probabilistic analysis of algorithms
Probabilistic roadmap
Randomized algorithms as zero-sum games
== Notes ==
== References ==
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw–Hill, 2001. ISBN 0-262-03293-7. Chapter 5: Probabilistic Analysis and Randomized Algorithms, pp. 91–122.
Dirk Draheim. "Semantics of the Probabilistic Typed Lambda Calculus (Markov Chain Semantics, Termination Behavior, and Denotational Semantics)." Springer, 2017.
Jon Kleinberg and Éva Tardos. Algorithm Design. Chapter 13: "Randomized algorithms".
Fallis, D. (2000). "The reliability of randomized algorithms". The British Journal for the Philosophy of Science. 51 (2): 255–271. doi:10.1093/bjps/51.2.255.
M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, New York (NY), 2005.
Rajeev Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, New York (NY), 1995.
Rajeev Motwani and P. Raghavan. Randomized Algorithms. A survey on Randomized Algorithms.
Christos Papadimitriou (1993), Computational Complexity (1st ed.), Addison Wesley, ISBN 978-0-201-53082-7 Chapter 11: Randomized computation, pp. 241–278.
Rabin, Michael O. (1980). "Probabilistic algorithm for testing primality". Journal of Number Theory. 12: 128–138. doi:10.1016/0022-314X(80)90084-0.
A. A. Tsay, W. S. Lovejoy, David R. Karger, Random Sampling in Cut, Flow, and Network Design Problems, Mathematics of Operations Research, 24(2):383–413, 1999.
"Randomized Algorithms for Scientific Computing" (RASC), OSTI.GOV (July 10th, 2021).
In queueing theory, a discipline within the mathematical theory of probability, the M/M/c queue (or Erlang–C model) is a multi-server queueing model. In Kendall's notation it describes a system where arrivals form a single queue and are governed by a Poisson process, there are c servers, and job service times are exponentially distributed. It is a generalisation of the M/M/1 queue which considers only a single server. The model with infinitely many servers is the M/M/∞ queue.
== Model definition ==
An M/M/c queue is a stochastic process whose state space is the set {0, 1, 2, 3, ...} where the value corresponds to the number of customers in the system, including any currently in service.
Arrivals occur at rate λ according to a Poisson process and move the process from state i to i+1.
Service times have an exponential distribution with parameter μ. If there are fewer than c jobs, some of the servers will be idle. If there are more than c jobs, the jobs queue in a buffer.
The buffer is of infinite size, so there is no limit on the number of customers it can contain.
The model can be described as a continuous time Markov chain with transition rate matrix
on the state space {0, 1, 2, 3, ...}. The model is a type of birth–death process. We write ρ = λ/(c μ) for the server utilization and require ρ < 1 for the queue to be stable. ρ represents the average proportion of time which each of the servers is occupied (assuming jobs finding more than one vacant server choose their servers randomly).
The state space diagram for this chain is as below.
== Stationary analysis ==
=== Number of customers in the system ===
If the traffic intensity is greater than one then the queue will grow without bound, but if the server utilization {\displaystyle \rho ={\frac {\lambda }{c\mu }}<1} then the system has a stationary distribution with probability mass function
where πk is the probability that the system contains k customers.
The probability that an arriving customer is forced to join the queue (all servers are occupied) is given by
which is referred to as Erlang's C formula and is often denoted C(c, λ/μ) or E2,c(λ/μ). The average number of customers in the system (in service and in the queue) is given by
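These quantities can be evaluated numerically; the following is a sketch of the standard Erlang C expressions (the helper names are ours, not from this article):

```python
from math import factorial

def erlang_c(c, a):
    """Erlang C: probability that an arriving customer must queue,
    for offered load a = lambda/mu and c servers (requires a/c < 1)."""
    rho = a / c
    assert rho < 1, "utilization must be below 1 for stability"
    tail = a**c / (factorial(c) * (1 - rho))   # states with all servers busy
    p0_inv = sum(a**k / factorial(k) for k in range(c)) + tail
    return tail / p0_inv

def mean_customers(lam, mu, c):
    """Average number in the system: L = C(c, a) * rho/(1 - rho) + a."""
    a = lam / mu
    rho = a / c
    return erlang_c(c, a) * rho / (1 - rho) + a
```

With one server this reduces to the familiar M/M/1 results, erlang_c(1, ρ) = ρ and L = ρ/(1 − ρ); for example, erlang_c(2, 1.0) evaluates to 1/3.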
=== Busy period of server ===
The busy period of the M/M/c queue can either refer to:
full busy period: the time period between an arrival which finds c−1 customers in the system until a departure which leaves the system with c−1 customers
partial busy period: the time period between an arrival which finds the system empty until a departure which leaves the system again empty.
Write Tk = min( t: k jobs in the system at time 0+ and k − 1 jobs in the system at time t) and ηk(s) for the Laplace–Stieltjes transform of the distribution of Tk. Then
For k > c, Tk has the same distribution as Tc.
For k = c,
For k < c,
=== Response time ===
The response time is the total amount of time a customer spends in both the queue and in service. The average response time is the same for all work conserving service disciplines and is
==== Customers in first-come, first-served discipline ====
The customer either experiences an immediate exponential service, or must wait for k customers to be served before their own service, thus experiencing an Erlang distribution with shape parameter k + 1.
==== Customers in processor sharing discipline ====
In a processor sharing queue the service capacity of the queue is split equally between the jobs in the queue. In the M/M/c queue this means that when there are c or fewer jobs in the system, each job is serviced at rate μ. However, when there are more than c jobs in the system the service rate of each job decreases and is {\displaystyle {\frac {c\mu }{n}}}, where n is the number of jobs in the system. This means that arrivals after a job of interest can impact the service time of the job of interest. The Laplace–Stieltjes transform of the response time distribution has been shown to be a solution to a Volterra integral equation from which moments can be computed. An approximation has been offered for the response time distribution.
== Finite capacity ==
In an M/M/c/K queue only K customers can queue at any one time (including those in service). Any further arrivals to the queue are considered "lost". We assume that K ≥ c. The model has transition rate matrix
on the state space {0, 1, 2, ..., c, ..., K}. In the case where c = K, the M/M/c/c queue is also known as the Erlang–B model.
=== Transient analysis ===
See Takács for a transient solution and Stadje for busy period results.
=== Stationary analysis ===
Stationary probabilities are given by
The average number of customers in the system is
and the average time in the system for a customer is
The average time in the queue for a customer is
The average number of customers in the queue can be obtained by using the effective arrival rate. The effective arrival rate is calculated by
Thus we can obtain the average number of customers in the queue by
An implementation of the above calculations in Python can be found.
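A minimal version of such an implementation might look like the following (a sketch of the standard M/M/c/K stationary formulas, not the externally referenced code; helper names are ours):

```python
from math import factorial

def mmcK_probs(lam, mu, c, K):
    """Stationary distribution of the M/M/c/K queue
    (K counts all customers in the system, including those in service)."""
    a = lam / mu
    unnorm = []
    for n in range(K + 1):
        if n < c:
            unnorm.append(a**n / factorial(n))
        else:
            unnorm.append(a**n / (factorial(c) * c**(n - c)))
    Z = sum(unnorm)
    return [u / Z for u in unnorm]

def mmcK_metrics(lam, mu, c, K):
    """Mean number in system L, mean time in system W, effective arrival rate."""
    pi = mmcK_probs(lam, mu, c, K)
    L = sum(n * p for n, p in enumerate(pi))
    lam_eff = lam * (1 - pi[K])   # arrivals finding a full system are lost
    W = L / lam_eff               # Little's law with the effective rate
    return L, W, lam_eff
```

For c = K = 1 with λ = μ the blocking probability is 1/2 and W = 1/μ, since an admitted customer never queues.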
== Heavy-traffic limits ==
Writing X(t) for the number of customers in the system at time t, it can be shown that under three different conditions the process
converges to a diffusion process.
Fix μ and c, increase λ and scale by n = 1/(1 − ρ)2.
Fix μ and ρ, increase λ and c, and scale by n = c.
Fix as a constant β where
and increase λ and c using the scale n = c or n = 1/(1 − ρ)2. This case is called the Halfin–Whitt regime.
== See also ==
Spectral expansion solution
M/G/k queue
== References ==
A Markov logic network (MLN) is a probabilistic logic which applies the ideas of a Markov network to first-order logic, defining probability distributions on possible worlds on any given domain.
== History ==
In 2002, Ben Taskar, Pieter Abbeel and Daphne Koller introduced relational Markov networks as templates to specify Markov networks abstractly and without reference to a specific domain. Work on Markov logic networks began in 2003 by Pedro Domingos and Matt Richardson. Markov logic networks are a popular formalism for statistical relational learning.
== Syntax ==
A Markov logic network consists of a collection of formulas from first-order logic, to each of which is assigned a real number, the weight. The underlying idea is that an interpretation is more likely if it satisfies formulas with positive weights and less likely if it satisfies formulas with negative weights.
For instance, the following Markov logic network codifies how smokers are more likely to be friends with other smokers, and how stress encourages smoking:
{\displaystyle {\begin{array}{lcl}2.0&::&\mathrm {smokes} (X)\leftarrow \mathrm {smokes} (Y)\land \mathrm {influences} (X,Y)\\0.5&::&\mathrm {smokes} (X)\leftarrow \mathrm {stress} (X)\end{array}}}
== Semantics ==
Together with a given domain, a Markov logic network defines a probability distribution on the set of all interpretations of its predicates on the given domain. The underlying idea is that an interpretation is more likely if it satisfies formulas with positive weights and less likely if it satisfies formulas with negative weights.
For any n-ary predicate symbol R that occurs in the Markov logic network and every n-tuple a1, …, an of domain elements, R(a1, …, an) is a grounding of R. An interpretation is given by allocating a Boolean truth value (true or false) to each grounding of an element. A true grounding of a formula φ in an interpretation with free variables x1, …, xn is a variable assignment of x1, …, xn that makes φ true in that interpretation.
Then the probability of any given interpretation is directly proportional to {\displaystyle \exp(\sum _{j}w_{j}n_{j})}, where wj is the weight of the j-th sentence of the Markov logic network and nj is the number of its true groundings.
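For the smokers example above with a two-element domain, the unnormalized weight exp(w1·n1 + w2·n2) of an interpretation can be computed by counting true groundings (an illustrative sketch; the domain and helper names are ours):

```python
from itertools import product
from math import exp

domain = ["Anna", "Bob"]

def implies(body, head):
    """Material implication head <- body."""
    return (not body) or head

def unnormalized_weight(smokes, stress, influences, w1=2.0, w2=0.5):
    """exp(w1*n1 + w2*n2), where n1 and n2 count the true groundings of
    the two example formulas in the given interpretation."""
    n1 = sum(implies(smokes[y] and influences[(x, y)], smokes[x])
             for x, y in product(domain, domain))
    n2 = sum(implies(stress[x], smokes[x]) for x in domain)
    return exp(w1 * n1 + w2 * n2)

everyone = {d: True for d in domain}
all_pairs = {p: True for p in product(domain, domain)}
w_all = unnormalized_weight(everyone, everyone, all_pairs)  # all 6 groundings true
```

When every grounding is true, n1 = 4 and n2 = 2, so the weight is exp(2.0·4 + 0.5·2) = exp(9); dividing by the sum of these weights over all interpretations would give the actual probabilities.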
This can also be seen as inducing a Markov network whose nodes are the groundings of the predicates occurring in the Markov logic network. The feature functions of this network are the groundings of the sentences occurring in the Markov logic network, with value {\displaystyle e^{w}} if the grounding is true and 1 otherwise (where again w is the weight of the formula).
== Inference ==
The probability distributions induced by Markov logic networks can be queried for the probability of a particular event, given by an atomic formula (marginal inference), possibly conditioned by another atomic formula.
Marginal inference can be performed using standard Markov network inference techniques over the minimal subset of the relevant Markov network required for answering the query. Exact inference is known to be #P-complete in the size of the domain.
In practice, the exact probability is often approximated. Techniques for approximate inference include Gibbs sampling, belief propagation, or approximation via pseudolikelihood.
The class of Markov logic networks which use only two variables in any formula allows for polynomial time exact inference by reduction to weighted model counting.
== See also ==
Markov random field
Statistical relational learning
Probabilistic logic network
Probabilistic soft logic
ProbLog
== Resources ==
== External links ==
University of Washington Statistical Relational Learning group
Alchemy 2.0: Markov logic networks in C++
pracmln: Markov logic networks in Python
ProbCog: Markov logic networks in Python and Java that can use its own inference engine or Alchemy's
markov thebeast: Markov logic networks in Java
RockIt: Markov logic networks in Java (with web interface/REST API)
Tuffy: A Learning and Inference Engine with strong RDBMs-based optimization for scalability
Felix: A successor to Tuffy, with prebuilt submodules to speed up common subtasks
Factorie: Scala based probabilistic inference language, with prebuilt submodules for natural language processing etc
Figaro: Scala based MLN language
LoMRF: Logical Markov Random Fields, an open-source implementation of Markov Logic Networks in Scala
In probability theory and statistics, the covariance function describes how much two random variables change together (their covariance) with varying spatial or temporal separation. For a random field or stochastic process Z(x) on a domain D, a covariance function C(x, y) gives the covariance of the values of the random field at the two locations x and y:
{\displaystyle C(x,y):=\operatorname {cov} (Z(x),Z(y))=\mathbb {E} {\Big [}{\big (}Z(x)-\mathbb {E} [Z(x)]{\big )}{\big (}Z(y)-\mathbb {E} [Z(y)]{\big )}{\Big ]}.}
The same C(x, y) is called the autocovariance function in two instances: in time series (to denote exactly the same concept except that x and y refer to locations in time rather than in space), and in multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross covariance between two different variables at different locations, Cov(Z(x1), Y(x2))).
== Admissibility ==
For locations x1, x2, ..., xN ∈ D the variance of every linear combination {\displaystyle X=\sum _{i=1}^{N}w_{i}Z(x_{i})} can be computed as
{\displaystyle \operatorname {var} (X)=\sum _{i=1}^{N}\sum _{j=1}^{N}w_{i}C(x_{i},x_{j})w_{j}.}
A function is a valid covariance function if and only if this variance is non-negative for all possible choices of N and weights w1, ..., wN. A function with this property is called positive semidefinite.
== Simplifications with stationarity ==
In case of a weakly stationary random field, where {\displaystyle C(x_{i},x_{j})=C(x_{i}+h,x_{j}+h)} for any lag h, the covariance function can be represented by a one-parameter function
{\displaystyle C_{s}(h)=C(0,h)=C(x,x+h)}
which is called a covariogram and also a covariance function. Implicitly the C(xi, xj) can be computed from Cs(h) by:
{\displaystyle C(x,y)=C_{s}(y-x).}
The positive definiteness of this single-argument version of the covariance function can be checked by Bochner's theorem.
== Parametric families of covariance functions ==
For a given variance {\displaystyle \sigma ^{2}}, a simple stationary parametric covariance function is the "exponential covariance function"
{\displaystyle C(d)=\sigma ^{2}\exp(-d/V)}
where V is a scaling parameter (correlation length), and d = d(x,y) is the distance between two points. Sample paths of a Gaussian process with the exponential covariance function are not smooth. The "squared exponential" (or "Gaussian") covariance function:
{\displaystyle C(d)=\sigma ^{2}\exp(-(d/V)^{2})}
is a stationary covariance function with smooth sample paths.
The Matérn covariance function and rational quadratic covariance function are two parametric families of stationary covariance functions. The Matérn family includes the exponential and squared exponential covariance functions as special cases.
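Both families are easy to evaluate numerically. The following sketch (using NumPy, with illustrative parameter values) builds a squared-exponential covariance matrix, checks that it is positive semidefinite, and draws one Gaussian-process sample path:

```python
import numpy as np

def exp_cov(d, sigma2=1.0, V=1.0):
    """Exponential covariance: valid, but sample paths are not smooth."""
    return sigma2 * np.exp(-d / V)

def sq_exp_cov(d, sigma2=1.0, V=1.0):
    """Squared-exponential ("Gaussian") covariance: smooth sample paths."""
    return sigma2 * np.exp(-((d / V) ** 2))

x = np.linspace(0.0, 5.0, 50)
d = np.abs(x[:, None] - x[None, :])   # matrix of pairwise distances
K = sq_exp_cov(d)

# A valid covariance matrix must be (numerically) positive semidefinite.
eigvals = np.linalg.eigvalsh(K)

# Draw one Gaussian-process sample path (tiny jitter for numerical stability).
rng = np.random.default_rng(0)
path = rng.multivariate_normal(np.zeros(len(x)), K + 1e-10 * np.eye(len(x)))
```

Replacing sq_exp_cov with exp_cov yields visibly rougher paths, matching the smoothness remarks above.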
== See also ==
Autocorrelation function
Correlation function
Covariance matrix
Covariance operator – Operator in probability theory
Kriging
Positive-definite kernel
Random field
Stochastic process
Variogram
== References ==
In statistics, the algebra of random variables provides rules for the symbolic manipulation of random variables, while avoiding delving too deeply into the mathematically sophisticated ideas of probability theory. Its symbolism allows the treatment of sums, products, ratios and general functions of random variables, as well as dealing with operations such as finding the probability distributions and the expectations (or expected values), variances and covariances of such combinations.
In principle, the elementary algebra of random variables is equivalent to that of conventional non-random (or deterministic) variables. However, the changes occurring on the probability distribution of a random variable obtained after performing algebraic operations are not straightforward. Therefore, the behavior of the different operators of the probability distribution, such as expected values, variances, covariances, and moments, may be different from that observed for the random variable using symbolic algebra. It is possible to identify some key rules for each of those operators, resulting in different types of algebra for random variables, apart from the elementary symbolic algebra: Expectation algebra, Variance algebra, Covariance algebra, Moment algebra, etc.
== Elementary symbolic algebra of random variables ==
Considering two random variables X and Y, the following algebraic operations are possible:
Addition: {\displaystyle Z=X+Y=Y+X}
Subtraction: {\displaystyle Z=X-Y=-Y+X}
Multiplication: {\displaystyle Z=XY=YX}
Division: suppose {\displaystyle Y\neq 0}; then {\displaystyle Z=X/Y=X\cdot (1/Y)=(1/Y)\cdot X}.
Exponentiation: {\displaystyle Z=X^{Y}=e^{Y\ln(X)}}
In all cases, the variable Z resulting from each operation is also a random variable. All commutative and associative properties of conventional algebraic operations are also valid for random variables. If any of the random variables is replaced by a deterministic variable or by a constant value, all the previous properties remain valid.
== Expectation algebra for random variables ==
The expected value E[Z] of the random variable Z resulting from an algebraic operation between two random variables can be calculated using the following set of rules:
Addition: {\displaystyle \operatorname {E} [Z]=\operatorname {E} [X+Y]=\operatorname {E} [X]+\operatorname {E} [Y]=\operatorname {E} [Y]+\operatorname {E} [X]}
Subtraction: {\displaystyle \operatorname {E} [Z]=\operatorname {E} [X-Y]=\operatorname {E} [X]-\operatorname {E} [Y]=-\operatorname {E} [Y]+\operatorname {E} [X]}
Multiplication: {\displaystyle \operatorname {E} [Z]=\operatorname {E} [XY]=\operatorname {E} [YX]}. Particularly, if X and Y are independent from each other, then: {\displaystyle \operatorname {E} [XY]=\operatorname {E} [X]\cdot \operatorname {E} [Y]=\operatorname {E} [Y]\cdot \operatorname {E} [X]}.
Division: {\displaystyle \operatorname {E} [Z]=\operatorname {E} [X/Y]=\operatorname {E} [X\cdot (1/Y)]=\operatorname {E} [(1/Y)\cdot X]}. Particularly, if X and Y are independent from each other, then: {\displaystyle \operatorname {E} [X/Y]=\operatorname {E} [X]\cdot \operatorname {E} [1/Y]=\operatorname {E} [1/Y]\cdot \operatorname {E} [X]}.
Exponentiation: {\displaystyle \operatorname {E} [Z]=\operatorname {E} [X^{Y}]=\operatorname {E} [e^{Y\ln(X)}]}
If any of the random variables is replaced by a deterministic variable or by a constant value k, the previous properties remain valid considering that {\displaystyle \Pr(X=k)=1} and, therefore, {\displaystyle \operatorname {E} [X]=k}.
If Z is defined as a general non-linear algebraic function f of a random variable X, then:
{\displaystyle \operatorname {E} [Z]=\operatorname {E} [f(X)]\neq f(\operatorname {E} [X])}
Some examples of this property include:
{\displaystyle \operatorname {E} [X^{2}]\neq \operatorname {E} [X]^{2}}
{\displaystyle \operatorname {E} [1/X]\neq 1/\operatorname {E} [X]}
{\displaystyle \operatorname {E} [e^{X}]\neq e^{\operatorname {E} [X]}}
{\displaystyle \operatorname {E} [\ln(X)]\neq \ln(\operatorname {E} [X])}
The exact value of the expectation of the non-linear function will depend on the particular probability distribution of the random variable X.
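This can be checked by simulation: for X ~ Exp(1), E[X²] = 2 while E[X]² = 1 (a minimal Monte Carlo sketch):

```python
import random

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(200_000)]  # X ~ Exp(1)

mean_x = sum(samples) / len(samples)                   # estimates E[X]   = 1
mean_x2 = sum(x * x for x in samples) / len(samples)   # estimates E[X^2] = 2

# E[X^2] = 2 but E[X]^2 = 1: expectation does not commute with squaring.
```

The gap E[X²] − E[X]² is exactly the variance of X, which is 1 for the Exp(1) distribution.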
== Variance algebra for random variables ==
The variance Var[Z] of the random variable Z resulting from an algebraic operation between random variables can be calculated using the following set of rules:
Addition: {\displaystyle \operatorname {Var} [Z]=\operatorname {Var} [X+Y]=\operatorname {Var} [X]+2\operatorname {Cov} [X,Y]+\operatorname {Var} [Y].} Particularly, if X and Y are independent from each other, then: {\displaystyle \operatorname {Var} [X+Y]=\operatorname {Var} [X]+\operatorname {Var} [Y].}
Subtraction: {\displaystyle \operatorname {Var} [Z]=\operatorname {Var} [X-Y]=\operatorname {Var} [X]-2\operatorname {Cov} [X,Y]+\operatorname {Var} [Y].} Particularly, if X and Y are independent from each other, then: {\displaystyle \operatorname {Var} [X-Y]=\operatorname {Var} [X]+\operatorname {Var} [Y].} That is, for independent random variables the variance is the same for additions and subtractions: {\displaystyle \operatorname {Var} [X+Y]=\operatorname {Var} [X-Y]=\operatorname {Var} [Y-X]=\operatorname {Var} [-X-Y].}
Multiplication: {\displaystyle \operatorname {Var} [Z]=\operatorname {Var} [XY]=\operatorname {Var} [YX].} Particularly, if X and Y are independent from each other, then:
{\displaystyle {\begin{aligned}\operatorname {Var} [XY]&=\operatorname {E} [X^{2}]\cdot \operatorname {E} [Y^{2}]-{\left(\operatorname {E} [X]\cdot \operatorname {E} [Y]\right)}^{2}\\[2pt]&=\operatorname {Var} [X]\cdot \operatorname {Var} [Y]+\operatorname {Var} [X]\cdot {\left(\operatorname {E} [Y]\right)}^{2}+\operatorname {Var} [Y]\cdot {\left(\operatorname {E} [X]\right)}^{2}.\end{aligned}}}
Division:
Var[Z] = Var[X / Y] = Var[X ⋅ (1/Y)] = Var[(1/Y) ⋅ X].
Particularly, if X and Y are independent of each other, then:
Var[X / Y] = E[X²] ⋅ E[1/Y²] − (E[X] ⋅ E[1/Y])²
           = Var[X] ⋅ Var[1/Y] + Var[X] ⋅ (E[1/Y])² + Var[1/Y] ⋅ (E[X])².
Exponentiation:
Var[Z] = Var[X^Y] = Var[e^(Y ln X)],
where Cov[X, Y] = Cov[Y, X] denotes the covariance operator between the random variables X and Y.
The variance of a random variable can also be expressed directly in terms of the covariance or in terms of the expected value:
Var[X] = Cov(X, X) = E[X²] − E[X]².
If any of the random variables is replaced by a deterministic variable or by a constant value k, the previous properties remain valid, considering that Pr(X = k) = 1 and, consequently, E[X] = k, Var[X] = 0 and Cov[Y, k] = 0. Special cases are the addition and multiplication of a random variable with a deterministic variable or a constant, where:
Var[k + Y] = Var[Y]
Var[kY] = k² Var[Y]
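The two constant rules are algebraic identities, so they hold exactly even for the sample variance of any data set. A quick illustration (not from the article; the sample and the constant k = 3 are arbitrary choices):

```python
import random

random.seed(0)
ys = [random.gauss(0, 2) for _ in range(100_000)]  # arbitrary sample

def var(xs):
    """Population-style sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

k = 3.0
# Shifting by a constant leaves the variance unchanged.
assert abs(var([k + y for y in ys]) - var(ys)) < 1e-9
# Scaling by a constant multiplies the variance by k^2.
assert abs(var([k * y for y in ys]) - k ** 2 * var(ys)) < 1e-6
```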
If Z is defined as a general non-linear algebraic function f of a random variable X, then:
Var[Z] = Var[f(X)] ≠ f(Var[X])
The exact value of the variance of the non-linear function will depend on the particular probability distribution of the random variable X.
== Covariance algebra for random variables ==
The covariance Cov[Z, X] between a random variable Z resulting from an algebraic operation and the random variable X can be calculated using the following set of rules:
Addition:
Cov[Z, X] = Cov[X + Y, X] = Var[X] + Cov[X, Y].
If X and Y are independent of each other, then:
Cov[X + Y, X] = Var[X].
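The addition rule holds even when X and Y are dependent. The following sketch (not from the article; the joint distribution is an illustrative assumption with positively correlated coordinates) checks Cov[X + Y, X] = Var[X] + Cov[X, Y] by exact enumeration:

```python
# Joint distribution of a *dependent* pair (X, Y): each entry is ((x, y), p).
joint = [((0, 0), 0.4), ((0, 1), 0.1), ((1, 0), 0.1), ((1, 1), 0.4)]

def e(f):
    """Expectation of f(X, Y) under the joint distribution."""
    return sum(f(x, y) * p for (x, y), p in joint)

cov_xy = e(lambda x, y: x * y) - e(lambda x, y: x) * e(lambda x, y: y)
var_x = e(lambda x, y: x * x) - e(lambda x, y: x) ** 2
# Cov[Z, X] with Z = X + Y, computed from its definition.
cov_zx = e(lambda x, y: (x + y) * x) - e(lambda x, y: x + y) * e(lambda x, y: x)

assert abs(cov_zx - (var_x + cov_xy)) < 1e-12
```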
Subtraction:
Cov[Z, X] = Cov[X − Y, X] = Var[X] − Cov[X, Y].
If X and Y are independent of each other, then:
Cov[X − Y, X] = Var[X].
Multiplication:
Cov[Z, X] = Cov[XY, X] = E[X²Y] − E[XY] E[X].
If X and Y are independent of each other, then:
Cov[XY, X] = Var[X] ⋅ E[Y].
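The independent-case multiplication rule can also be confirmed by enumerating the product measure. A sketch (not from the article; the two distributions are illustrative choices):

```python
from itertools import product

# Independent discrete random variables (illustrative choice).
X = [(1, 0.25), (3, 0.75)]
Y = [(2, 0.5), (4, 0.5)]

# Product measure: P(X = x, Y = y) = P(X = x) P(Y = y).
pairs = [((x, y), px * py) for (x, px), (y, py) in product(X, Y)]

def e(f):
    return sum(f(x, y) * p for (x, y), p in pairs)

# Cov[XY, X] = E[X^2 Y] - E[XY] E[X] from the definition of covariance.
cov = e(lambda x, y: x * x * y) - e(lambda x, y: x * y) * e(lambda x, y: x)
var_x = e(lambda x, y: x * x) - e(lambda x, y: x) ** 2
ey = e(lambda x, y: y)

assert abs(cov - var_x * ey) < 1e-12  # matches Var[X] * E[Y]
```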
Division (covariance with respect to the numerator):
Cov[Z, X] = Cov[X / Y, X] = E[X² / Y] − E[X / Y] E[X].
If X and Y are independent of each other, then:
Cov[X / Y, X] = Var[X] ⋅ E[1/Y].
Division (covariance with respect to the denominator):
Cov[Z, X] = Cov[Y / X, X] = E[Y] − E[Y / X] E[X].
If X and Y are independent of each other, then:
Cov[Y / X, X] = E[Y] ⋅ (1 − E[X] ⋅ E[1/X]).
Exponentiation (covariance with respect to the base):
Cov[Z, X] = Cov[X^Y, X] = E[X^(Y+1)] − E[X^Y] E[X].
Exponentiation (covariance with respect to the power):
Cov[Z, X] = Cov[Y^X, X] = E[X Y^X] − E[Y^X] E[X].
The covariance of two random variables can also be expressed directly in terms of the expected value:
Cov(X, Y) = E[XY] − E[X] E[Y].
If any of the random variables is replaced by a deterministic variable or by a constant value k, the previous properties remain valid, considering that E[k] = k, Var[k] = 0 and Cov[X, k] = 0.
If Z is defined as a general non-linear algebraic function f of a random variable X, then:
Cov[Z, X] = Cov[f(X), X] = E[X f(X)] − E[f(X)] E[X].
The exact value of the covariance of the non-linear function will depend on the particular probability distribution of the random variable X.
== Approximations by Taylor series expansions of moments ==
If the moments of a certain random variable X are known (or can be determined by integration if the probability density function is known), then it is possible to approximate the expected value of any general non-linear function f(X) by a Taylor series expansion about the mean, as follows:
f(X) = ∑_{n=0}^{∞} (1/n!) (dⁿf/dXⁿ)|_{X=μ} (X − μ)ⁿ,
where μ = E[X] is the mean value of X. Then:
E[f(X)] = E[ ∑_{n=0}^{∞} (1/n!) (dⁿf/dXⁿ)|_{X=μ} (X − μ)ⁿ ]
        = ∑_{n=0}^{∞} (1/n!) (dⁿf/dXⁿ)|_{X=μ} E[(X − μ)ⁿ]
        = ∑_{n=0}^{∞} (1/n!) (dⁿf/dXⁿ)|_{X=μ} μₙ(X),
where μₙ(X) = E[(X − μ)ⁿ] is the n-th central moment of X about its mean. Note that, by definition, μ₀(X) = 1 and μ₁(X) = 0. The first-order term always vanishes but was kept to obtain a closed-form expression.
Then,
E[f(X)] ≈ ∑_{n=0}^{n_max} (1/n!) (dⁿf/dXⁿ)|_{X=μ} μₙ(X),
where the Taylor expansion is truncated after the n_max-th moment.
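For a polynomial f of degree d, the truncated expansion with n_max = d is exact, which makes a clean sanity check. The following sketch (not from the article; the cubic f and the discrete distribution of X are illustrative assumptions) compares the truncated moment expansion against the exact expectation:

```python
import math

# Discrete random variable X with known distribution (illustrative choice).
X = [(0, 0.2), (1, 0.5), (4, 0.3)]

f = lambda x: x ** 3 - 2 * x
# f and its derivatives f', f'', f''' (hand-computed for this cubic).
derivs = [f, lambda x: 3 * x ** 2 - 2, lambda x: 6 * x, lambda x: 6.0]

mu = sum(v * p for v, p in X)                               # E[X]
central = lambda n: sum((v - mu) ** n * p for v, p in X)    # mu_n(X)

# Truncated expansion: sum_{n=0}^{3} f^{(n)}(mu)/n! * mu_n(X).
approx = sum(derivs[n](mu) / math.factorial(n) * central(n) for n in range(4))
exact = sum(f(v) * p for v, p in X)                         # E[f(X)] directly

assert abs(approx - exact) < 1e-9  # exact for a cubic truncated at n_max = 3
```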
Particularly for functions of normal random variables, it is possible to obtain a Taylor expansion in terms of the standard normal distribution:
f(X) = ∑_{n=0}^{∞} (σⁿ/n!) (dⁿf/dXⁿ)|_{X=μ} μₙ(Z),
where X ∼ N(μ, σ²) is a normal random variable and Z ∼ N(0, 1) follows the standard normal distribution. Thus,
E[f(X)] ≈ ∑_{n=0}^{n_max} (σⁿ/n!) (dⁿf/dXⁿ)|_{X=μ} μₙ(Z),
where the moments of the standard normal distribution are given by:
μₙ(Z) = ∏_{i=1}^{n/2} (2i − 1)   if n is even,
μₙ(Z) = 0                        if n is odd.
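The even-n product above is the double factorial (n − 1)!!, which also satisfies the classical recurrence mₙ = (n − 1) mₙ₋₂ for the central moments of N(0, 1). A quick cross-check (not from the article) of the product formula against that recurrence:

```python
from math import prod

def mu_formula(n):
    """Product formula from the text: prod_{i=1}^{n/2} (2i-1) for even n, else 0."""
    return prod(2 * i - 1 for i in range(1, n // 2 + 1)) if n % 2 == 0 else 0

def mu_recurrence(n):
    """Standard normal central moments via m_n = (n-1) m_{n-2}, m_0 = 1, m_1 = 0."""
    m = [1, 0]
    for k in range(2, n + 1):
        m.append((k - 1) * m[k - 2])
    return m[n]

for n in range(11):
    assert mu_formula(n) == mu_recurrence(n)
# e.g. mu_2 = 1, mu_4 = 3, mu_6 = 15, mu_8 = 105
```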
Similarly for normal random variables, it is also possible to approximate the variance of the non-linear function as a Taylor series expansion as:
Var[f(X)] ≈ ∑_{n=1}^{n_max} ( (σⁿ/n!) (dⁿf/dXⁿ)|_{X=μ} )² Var[Zⁿ]
          + ∑_{n=1}^{n_max} ∑_{m≠n} (σ^{n+m}/(n! m!)) (dⁿf/dXⁿ)|_{X=μ} (dᵐf/dXᵐ)|_{X=μ} Cov[Zⁿ, Zᵐ],
where
Var[Zⁿ] = ∏_{i=1}^{n} (2i − 1) − ∏_{i=1}^{n/2} (2i − 1)²   if n is even,
Var[Zⁿ] = ∏_{i=1}^{n} (2i − 1)                             if n is odd,
and
Cov[Zⁿ, Zᵐ] = ∏_{i=1}^{(n+m)/2} (2i − 1) − ∏_{i=1}^{n/2} (2i − 1) ⋅ ∏_{j=1}^{m/2} (2j − 1)   if n and m are even,
Cov[Zⁿ, Zᵐ] = ∏_{i=1}^{(n+m)/2} (2i − 1)                                                     if n and m are odd,
Cov[Zⁿ, Zᵐ] = 0                                                                              otherwise.
== Algebra of complex random variables ==
In the algebraic axiomatization of probability theory, the primary concept is not that of probability of an event, but rather that of a random variable. Probability distributions are determined by assigning an expectation to each random variable. The measurable space and the probability measure arise from the random variables and expectations by means of well-known representation theorems of analysis. One of the important features of the algebraic approach is that apparently infinite-dimensional probability distributions are not harder to formalize than finite-dimensional ones.
Random variables are assumed to have the following properties:
complex constants are possible realizations of a random variable;
the sum of two random variables is a random variable;
the product of two random variables is a random variable;
addition and multiplication of random variables are both commutative; and
there is a notion of conjugation of random variables, satisfying (XY)* = Y*X* and X** = X for all random variables X,Y and coinciding with complex conjugation if X is a constant.
This means that random variables form complex commutative *-algebras. If X = X* then the random variable X is called "real".
An expectation E on an algebra A of random variables is a normalized, positive linear functional. What this means is that
E[k] = k where k is a constant;
E[X*X] ≥ 0 for all random variables X;
E[X + Y] = E[X] + E[Y] for all random variables X and Y; and
E[kX] = kE[X] if k is a constant.
One may generalize this setup, allowing the algebra to be noncommutative. This leads to other areas of noncommutative probability such as quantum probability, random matrix theory, and free probability.
== See also ==
Relationships among probability distributions
Ratio distribution
Cauchy distribution
Slash distribution
Inverse distribution
Product distribution
Mellin transform
Sum of normally distributed random variables
List of convolutions of probability distributions – the probability measure of the sum of independent random variables is the convolution of their probability measures.
Law of total expectation
Law of total variance
Law of total covariance
Law of total cumulance
Taylor expansions for the moments of functions of random variables
Delta method
== References ==
== Further reading ==
Whittle, Peter (2000). Probability via Expectation (4th ed.). New York, NY: Springer. ISBN 978-0-387-98955-6. Retrieved 24 September 2012.
Springer, Melvin Dale (1979). The Algebra of Random Variables. Wiley. ISBN 0-471-01406-0. Retrieved 24 September 2012.
"Measure algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In probability theory, Bernstein inequalities give bounds on the probability that the sum of random variables deviates from its mean. In the simplest case, let X1, ..., Xn be independent Bernoulli random variables taking values +1 and −1 with probability 1/2 (this distribution is also known as the Rademacher distribution), then for every positive
ε,
P( |(1/n) ∑_{i=1}^{n} Xᵢ| > ε ) ≤ 2 exp( −nε² / (2(1 + ε/3)) ).
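A Monte Carlo illustration of this Rademacher tail bound (not from the article; n, ε and the trial count are arbitrary illustrative choices) compares the empirical tail frequency against the stated bound:

```python
import math
import random

random.seed(1)
n, eps, trials = 200, 0.2, 20_000

# Empirical frequency of |S_n / n| > eps for Rademacher +/-1 variables.
hits = 0
for _ in range(trials):
    s = sum(random.choice((-1, 1)) for _ in range(n))
    if abs(s / n) > eps:
        hits += 1
empirical = hits / trials

# Bernstein bound: 2 exp(-n eps^2 / (2 (1 + eps/3))).
bound = 2 * math.exp(-n * eps ** 2 / (2 * (1 + eps / 3)))

assert empirical <= bound  # the bound dominates the simulated tail
```

With these parameters the bound is about 0.047 while the true tail probability is an order of magnitude smaller, which is typical: Bernstein-type bounds are conservative but hold uniformly.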
Bernstein inequalities were proven and published by Sergei Bernstein in the 1920s and 1930s. Later, these inequalities were rediscovered several times in various forms. Thus, special cases of the Bernstein inequalities are also known as the Chernoff bound, Hoeffding's inequality and Azuma's inequality.
The martingale case of the Bernstein inequality is known as Freedman's inequality, and its refinement is known as Hoeffding's inequality.
== Some of the inequalities ==
1. Let X₁, …, Xₙ be independent zero-mean random variables. Suppose that |Xᵢ| ≤ M almost surely for all i. Then, for all positive t,
P( ∑_{i=1}^{n} Xᵢ ≥ t ) ≤ exp( −(t²/2) / (∑_{i=1}^{n} E[Xᵢ²] + Mt/3) ).
2. Let X₁, …, Xₙ be independent zero-mean random variables. Suppose that for some positive real L and every integer k ≥ 2,
E[|Xᵢᵏ|] ≤ (1/2) E[Xᵢ²] Lᵏ⁻² k!.
Then
P( ∑_{i=1}^{n} Xᵢ ≥ 2t √(∑ E[Xᵢ²]) ) < exp(−t²),   for 0 ≤ t ≤ (1/(2L)) √(∑ E[Xⱼ²]).
3. Let X₁, …, Xₙ be independent zero-mean random variables. Suppose that
E[|Xᵢᵏ|] ≤ (k!/4!) (L/5)ᵏ⁻⁴
for all integers k ≥ 4. Denote Aₖ = ∑ E[Xᵢᵏ]. 
Then,
P( |∑_{j=1}^{n} Xⱼ − A₃t²/(3A₂)| ≥ √(2A₂) t [1 + A₄t²/(6A₂²)] ) < 2 exp(−t²),   for 0 < t ≤ 5√(2A₂)/(4L).
4. Bernstein also proved generalizations of the inequalities above to weakly dependent random variables. For example, inequality (2) can be extended as follows. Let X₁, …, Xₙ be possibly non-independent random variables. Suppose that for all integers i > 0,
E[Xᵢ | X₁, …, Xᵢ₋₁] = 0,
E[Xᵢ² | X₁, …, Xᵢ₋₁] ≤ Rᵢ E[Xᵢ²],
E[Xᵢᵏ | X₁, …, Xᵢ₋₁] ≤ (1/2) E[Xᵢ² | X₁, …, Xᵢ₋₁] Lᵏ⁻² k!.
Then
P( ∑_{i=1}^{n} Xᵢ ≥ 2t √(∑_{i=1}^{n} Rᵢ E[Xᵢ²]) ) < exp(−t²),   for 0 < t ≤ (1/(2L)) √(∑_{i=1}^{n} Rᵢ E[Xᵢ²]).
More general results for martingales can be found in Fan et al. (2015).
== Proofs ==
The proofs are based on an application of Markov's inequality to the random variable
exp( λ ∑_{j=1}^{n} Xⱼ ),
for a suitable choice of the parameter λ > 0.
== Generalizations ==
The Bernstein inequality can be generalized to Gaussian random matrices. Let G = gᴴAg + 2 Re(gᴴa) be a scalar, where A is a complex Hermitian matrix and a is a complex vector of size N. The vector g ∼ CN(0, I) is a complex Gaussian vector of size N. Then for any σ ≥ 0, we have
P( G ≤ tr(A) − √(2σ) √(‖vec(A)‖² + 2‖a‖²) − σ s⁻(A) ) < exp(−σ),
where vec is the vectorization operation and s⁻(A) = max(−λ_max(A), 0), with λ_max(A) the largest eigenvalue of A. Another similar inequality is formulated as
P( G ≥ tr(A) + √(2σ) √(‖vec(A)‖² + 2‖a‖²) + σ s⁺(A) ) < exp(−σ),
where s⁺(A) = max(λ_max(A), 0).
== See also ==
Concentration inequality - a summary of tail-bounds on random variables.
Hoeffding's inequality
== References ==
(according to: S.N.Bernstein, Collected Works, Nauka, 1964)
A modern translation of some of these results can also be found in Prokhorov, A.V.; Korneichuk, N.P.; Motornyi, V.P. (2001) [1994], "Bernstein inequality", Encyclopedia of Mathematics, EMS Press
In theoretical computer science, the algorithmic Lovász local lemma gives an algorithmic way of constructing objects that obey a system of constraints with limited dependence.
Given a finite set of bad events {A1, ..., An} in a probability space with limited dependence amongst the Ais and with specific bounds on their respective probabilities, the Lovász local lemma proves that with non-zero probability all of these events can be avoided. However, the lemma is non-constructive in that it does not provide any insight on how to avoid the bad events.
If the events {A1, ..., An} are determined by a finite collection of mutually independent random variables, a simple Las Vegas algorithm with expected polynomial runtime proposed by Robin Moser and Gábor Tardos can compute an assignment to the random variables such that all events are avoided.
== Review of Lovász local lemma ==
The Lovász Local Lemma is a powerful tool commonly used in the probabilistic method to prove the existence of certain complex mathematical objects with a set of prescribed features. A typical proof proceeds by operating on the complex object in a random manner and uses the Lovász Local Lemma to bound the probability that any of the features is missing. The absence of a feature is considered a bad event and if it can be shown that all such bad events can be avoided simultaneously with non-zero probability, the existence follows. The lemma itself reads as follows:
Let 𝒜 = {A₁, …, Aₙ} be a finite set of events in the probability space Ω. For A ∈ 𝒜, let Γ(A) denote a subset of 𝒜 such that A is independent from the collection of events 𝒜 ∖ ({A} ∪ Γ(A)). If there exists an assignment of reals x : 𝒜 → (0, 1) to the events such that
∀A ∈ 𝒜: Pr[A] ≤ x(A) ∏_{B ∈ Γ(A)} (1 − x(B)),
then the probability of avoiding all events in 𝒜 is positive; in particular,
Pr[Ā₁ ∧ ⋯ ∧ Āₙ] ≥ ∏_{A ∈ 𝒜} (1 − x(A)).
== Algorithmic version of the Lovász local lemma ==
The Lovász Local Lemma is non-constructive because it only allows us to conclude the existence of structural properties or complex objects but does not indicate how these can be found or constructed efficiently in practice. Note that random sampling from the probability space Ω is likely to be inefficient, since the probability of the event of interest Pr[Ā₁ ∧ ⋯ ∧ Āₙ] is only bounded from below by the product ∏_{A ∈ 𝒜} (1 − x(A)) of small numbers and is therefore likely to be very small.
Under the assumption that all of the events in 𝒜 are determined by a finite collection of mutually independent random variables 𝒫 in Ω, Robin Moser and Gábor Tardos proposed an efficient randomized algorithm that computes an assignment to the random variables in 𝒫 such that all events in 𝒜 are avoided.
Hence, this algorithm can be used to efficiently construct witnesses of complex objects with prescribed features for most problems to which the Lovász Local Lemma applies.
=== History ===
Prior to the recent work of Moser and Tardos, earlier work had also made progress in developing algorithmic versions of the Lovász Local Lemma. József Beck in 1991 first gave proof that an algorithmic version was possible. In this breakthrough result, a stricter requirement was imposed upon the problem formulation than in the original non-constructive definition. Beck's approach required that, for each A ∈ 𝒜, the number of dependencies of A be bounded above by |Γ(A)| < 2^(n/48) (approximately). The existential version of the Local Lemma permits a larger upper bound on dependencies: |Γ(A)| < 2ⁿ/e.
This bound is known to be tight. Since the initial algorithm, work has been done to push algorithmic versions of the Local Lemma closer to this tight value. The work of Moser and Tardos is the most recent in this chain, and provides an algorithm that achieves this tight bound.
=== Algorithm ===
Let us first introduce some concepts that are used in the algorithm.
For any random variable P ∈ 𝒫, v_P denotes the current assignment (evaluation) of P. An assignment (evaluation) to all random variables is denoted (v_P)_𝒫.
The unique minimal subset of random variables in 𝒫 that determine the event A is denoted by vbl(A).
If the event A is true under an evaluation (v_P)_𝒫, we say that (v_P)_𝒫 satisfies A; otherwise, it avoids A.
Given a set of bad events 𝒜 we wish to avoid, determined by a collection of mutually independent random variables 𝒫, the algorithm proceeds as follows:
for all P ∈ 𝒫: v_P ← a random evaluation of P
while ∃A ∈ 𝒜 such that A is satisfied by (v_P)_𝒫:
    pick an arbitrary satisfied event A ∈ 𝒜
    for all P ∈ vbl(A): v_P ← a new random evaluation of P
return (v_P)_𝒫
In the first step, the algorithm randomly initializes the current assignment v_P for each random variable P ∈ 𝒫. This means that an assignment v_P is sampled randomly and independently according to the distribution of the random variable P. The algorithm then enters the main loop, which is executed until all events in 𝒜 are avoided, at which point the algorithm returns the current assignment. At each iteration of the main loop, the algorithm picks an arbitrary satisfied event A (either randomly or deterministically) and resamples all the random variables that determine A.
=== Main theorem ===
Let 𝒫 be a finite set of mutually independent random variables in the probability space Ω. Let 𝒜 be a finite set of events determined by these variables. If there exists an assignment of reals x : 𝒜 → (0, 1) to the events such that
∀A ∈ 𝒜: Pr[A] ≤ x(A) ∏_{B ∈ Γ(A)} (1 − x(B)),
then there exists an assignment of values to the variables 𝒫 avoiding all of the events in 𝒜.
Moreover, the randomized algorithm described above resamples an event A ∈ 𝒜 at most an expected x(A)/(1 − x(A)) times before it finds such an evaluation. Thus the expected total number of resampling steps, and therefore the expected runtime of the algorithm, is at most
∑_{A ∈ 𝒜} x(A)/(1 − x(A)).
The proof of this theorem using the method of entropy compression can be found in the paper by Moser and Tardos.
=== Symmetric version ===
The requirement of an assignment function x satisfying a set of inequalities in the theorem above is complex and not intuitive. But this requirement can be replaced by three simple conditions:
∀A ∈ 𝒜: |Γ(A)| ≤ D, i.e. each event A depends on at most D other events;
∀A ∈ 𝒜: Pr[A] ≤ p, i.e. the probability of each event A is at most p;
ep(D + 1) ≤ 1, where e is the base of the natural logarithm.
The version of the Lovász Local Lemma with these three conditions instead of the assignment function x is called the Symmetric Lovász Local Lemma. We can also state the Symmetric Algorithmic Lovász Local Lemma:
Let 𝒫 be a finite set of mutually independent random variables and 𝒜 be a finite set of events determined by these variables, as before. If the above three conditions hold, then there exists an assignment of values to the variables 𝒫 avoiding all of the events in 𝒜. Moreover, the randomized algorithm described above resamples an event A ∈ 𝒜 at most an expected 1/D times before it finds such an evaluation. Thus the expected total number of resampling steps, and therefore the expected runtime of the algorithm, is at most n/D.
== Example ==
The following example illustrates how the algorithmic version of the Lovász Local Lemma can be applied to a simple problem.
Let Φ be a CNF formula over variables X₁, ..., Xₙ, containing n clauses, with at least k literals in each clause, and with each variable Xᵢ appearing in at most 2ᵏ/(ke) clauses. Then, Φ is satisfiable.
This statement can be proven easily using the symmetric version of the Algorithmic Lovász Local Lemma. Let X₁, ..., Xₙ be the set of mutually independent random variables 𝒫, each sampled uniformly at random.
Firstly, we truncate each clause in Φ to contain exactly k literals. Since each clause is a disjunction, this does not harm satisfiability, for if we can find a satisfying assignment for the truncated formula, it can easily be extended to a satisfying assignment for the original formula by reinserting the truncated literals.
Now, define a bad event Aⱼ for each clause in Φ, where Aⱼ is the event that clause j in Φ is unsatisfied by the current assignment. Since each clause contains k literals (and therefore k variables) and since all variables are sampled uniformly at random, we can bound the probability of each bad event by
Pr[Aⱼ] = p = 2⁻ᵏ.
Since each variable can appear in at most 2ᵏ/(ke) clauses and there are k variables in each clause, each bad event Aⱼ can depend on at most
D = k(2ᵏ/(ke) − 1) ≤ 2ᵏ/e − 1
other events. Therefore:
$D + 1 \leq \frac{2^{k}}{e},$
and multiplying both sides by ep we get:
$ep(D+1) \leq e\,2^{-k}\,\frac{2^{k}}{e} = 1;$
it follows by the symmetric Lovász Local Lemma that the probability of a random assignment to X1, ..., Xn satisfying all clauses in Φ is non-zero and hence such an assignment must exist.
Now, the Algorithmic Lovász Local Lemma actually allows us to efficiently compute such an assignment by applying the algorithm described above. The algorithm proceeds as follows:
It starts with a random truth value assignment to the variables X1, ..., Xn sampled uniformly at random. While there exists a clause in Φ that is unsatisfied, it randomly picks an unsatisfied clause C in Φ and assigns a new truth value to all variables that appear in C chosen uniformly at random. Once all clauses in Φ are satisfied, the algorithm returns the current assignment. Hence, the Algorithmic Lovász Local Lemma proves that this algorithm has an expected runtime of at most
$\frac{n}{\frac{2^{k}}{e} - k}$
steps on CNF formulas that satisfy the two conditions above. A stronger version of the above statement is proven by Moser; see also Berman, Karpinski and Scott.
The algorithm is similar to WalkSAT, which is used to solve general Boolean satisfiability problems. The main difference is that in WalkSAT, after the unsatisfied clause C is selected, a single variable in C is selected at random and has its value flipped (which can be viewed as selecting uniformly among only $k$ rather than all $2^{k}$ value assignments to C).
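The resampling procedure described above can be sketched in a few lines. This is an illustrative implementation, not code from any cited paper; the clause encoding (DIMACS-style signed integers) and the small example formula are assumptions made for the demonstration.

```python
import random

def moser_tardos(clauses, n_vars, seed=0):
    """Sketch of the resampling algorithm for CNF formulas.

    `clauses` is a list of clauses; each clause is a list of signed
    integers (positive = plain literal, negative = negated literal),
    with variables 1-indexed as in the DIMACS format.
    """
    rng = random.Random(seed)
    # Start from a uniformly random truth assignment (index 0 unused).
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]

    def satisfied(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    while True:
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign
        # Pick a violated clause and resample all of its variables.
        clause = rng.choice(unsat)
        for l in clause:
            assign[abs(l)] = rng.random() < 0.5

# Example: a small satisfiable 3-variable instance (hypothetical).
phi = [[1, 2, 3], [-1, 2], [-2, -3], [1, -3]]
a = moser_tardos(phi, 3)
print(all(any(a[abs(l)] == (l > 0) for l in c) for c in phi))  # True
```

The only difference from WalkSAT, as noted above, is that all variables of the chosen clause are resampled rather than a single flipped one.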
== Applications ==
As mentioned before, the Algorithmic Version of the Lovász Local Lemma applies to most problems for which the general Lovász Local Lemma is used as a proof technique. Some of these problems are discussed in the following articles:
Probabilistic proofs of non-probabilistic theorems
Random graph
== Parallel version ==
The algorithm described above lends itself well to parallelization, since resampling two independent events $A, B \in \mathcal{A}$, i.e. with $\operatorname{vbl}(A) \cap \operatorname{vbl}(B) = \emptyset$, in parallel is equivalent to resampling A, B sequentially. Hence, at each iteration of the main loop one can determine the maximal set S of independent and satisfied events and resample all events in S in parallel.
Under the assumption that the assignment function x satisfies the slightly stronger conditions:
$\forall A \in \mathcal{A} : \Pr[A] \leq (1-\varepsilon)\, x(A) \prod_{B \in \Gamma(A)} (1-x(B))$
for some ε > 0, Moser and Tardos proved that the parallel algorithm achieves a better runtime complexity. In this case, the parallel version of the algorithm takes an expected
$O\left(\frac{1}{\varepsilon} \log \sum_{A \in \mathcal{A}} \frac{x(A)}{1-x(A)}\right)$
steps before it terminates. The parallel version of the algorithm can be seen as a special case of the sequential algorithm shown above, and so this result also holds for the sequential case.
== References ==
Recurrence period density entropy (RPDE) is a method, in the fields of dynamical systems, stochastic processes, and time series analysis, for determining the periodicity, or repetitiveness of a signal.
== Overview ==
Recurrence period density entropy is useful for characterising the extent to which a time series repeats the same sequence, and is therefore similar to linear autocorrelation and time-delayed mutual information, except that it measures repetitiveness in the phase space of the system, and is thus a more reliable measure based upon the dynamics of the underlying system that generated the signal. It has the advantage that it does not require the assumptions of linearity, Gaussianity or dynamical determinism. It has been successfully used to detect abnormalities in biomedical contexts such as speech signals.
The RPDE value $H_{\mathrm{norm}}$ is a scalar in the range zero to one. For purely periodic signals, $H_{\mathrm{norm}} = 0$, whereas for purely i.i.d., uniform white noise, $H_{\mathrm{norm}} \approx 1$.
== Method description ==
The RPDE method first requires the embedding of a time series in phase space, which, according to stochastic extensions of Takens' embedding theorems, can be carried out by forming time-delayed vectors:
$\mathbf{X}_{n} = [x_{n}, x_{n+\tau}, x_{n+2\tau}, \ldots, x_{n+(M-1)\tau}]$
for each value xn in the time series, where M is the embedding dimension and τ is the embedding delay. These parameters are obtained by systematic search for the optimal set, owing to the lack of practical embedding-parameter techniques for stochastic systems (Stark et al. 2003). Next, around each point $\mathbf{X}_{n}$ in the phase space, an $\varepsilon$-neighbourhood (an M-dimensional ball of this radius) is formed, and every time the time series returns to this ball after having left it, the time difference T between successive returns is recorded in a histogram. This histogram is normalised to sum to unity, to form an estimate of the recurrence period density function P(T). The normalised entropy of this density:
$H_{\mathrm{norm}} = -(\ln T_{\max})^{-1} \sum_{t=1}^{T_{\max}} P(t) \ln P(t)$
is the RPDE value, where $T_{\max}$ is the largest recurrence value (typically on the order of 1000 samples). Note that RPDE is intended to be applied to both deterministic and stochastic signals; therefore, strictly speaking, Takens' original embedding theorem does not apply and needs some modification.
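The method description above can be sketched directly in code. The embedding, neighbourhood and histogram parameters below (M, τ, ε, T_max) are illustrative choices rather than published defaults, and the brute-force return search is written for clarity, not speed.

```python
import numpy as np

def rpde(x, M=4, tau=1, eps=0.12, t_max=200):
    """Sketch of recurrence period density entropy (illustrative parameters)."""
    # Time-delay embedding: X_n = [x_n, x_{n+tau}, ..., x_{n+(M-1)tau}].
    N = len(x) - (M - 1) * tau
    X = np.column_stack([x[i * tau:i * tau + N] for i in range(M)])

    # Record the time between successive returns to the eps-ball around
    # each embedded point, counting only returns after the ball was left.
    counts = np.zeros(t_max + 1)
    for n in range(N):
        dist = np.linalg.norm(X - X[n], axis=1)
        inside = dist < eps
        left = False
        for m in range(n + 1, N):
            if not inside[m]:
                left = True
            elif left:
                T = m - n
                if T <= t_max:
                    counts[T] += 1
                break

    P = counts / counts.sum()            # recurrence period density P(T)
    nz = P > 0
    H = -np.sum(P[nz] * np.log(P[nz]))   # entropy of the density
    return H / np.log(t_max)             # normalised to the range [0, 1]

t = np.arange(2000)
h_periodic = rpde(np.sin(2 * np.pi * t / 25))
print(h_periodic)  # small for a strongly periodic signal
```

For the pure sine wave, the return-time histogram is concentrated near the 25-sample period, so the normalised entropy stays close to zero, as the text predicts.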
== RPDE in practice ==
RPDE has the ability to detect subtle changes in natural biological time series such as the breakdown of regular periodic oscillation in abnormal cardiac function which are hard to detect using classical signal processing tools such as the Fourier transform or linear prediction. The recurrence period density is a sparse representation for nonlinear, non-Gaussian and nondeterministic signals, whereas the Fourier transform is only sparse for purely periodic signals.
== See also ==
Recurrence plot, a powerful visualisation tool of recurrences in dynamical (and other) systems.
Recurrence quantification analysis, another approach to quantify recurrence properties.
== References ==
== External links ==
Fast MATLAB code for calculating the RPDE value.
http://www.recurrence-plot.tk/
In mathematics, specifically in the theory of Markovian stochastic processes in probability theory, the Chapman–Kolmogorov equation (CKE) is an identity relating the joint probability distributions of different sets of coordinates on a stochastic process. The equation was derived independently by both the British mathematician Sydney Chapman and the Russian mathematician Andrey Kolmogorov. The CKE is prominently used in recent variational Bayesian methods.
== Mathematical description ==
Suppose that { fi } is an indexed collection of random variables, that is, a stochastic process. Let
$p_{i_{1},\ldots ,i_{n}}(f_{1},\ldots ,f_{n})$
be the joint probability density function of the values of the random variables f1 to fn. Then, the Chapman–Kolmogorov equation is
$p_{i_{1},\ldots ,i_{n-1}}(f_{1},\ldots ,f_{n-1}) = \int_{-\infty}^{\infty} p_{i_{1},\ldots ,i_{n}}(f_{1},\ldots ,f_{n})\,df_{n}$
i.e. a straightforward marginalization over the nuisance variable.
(Note that nothing yet has been assumed about the temporal (or any other) ordering of the random variables—the above equation applies equally to the marginalization of any of them.)
=== In terms of Markov kernels ===
If we consider the Markov kernels induced by the transitions of a Markov process, the Chapman–Kolmogorov equation can be seen as giving a way of composing kernels, generalizing the way stochastic matrices compose. Given a measurable space $(X, \mathcal{A})$ and a Markov kernel $k : (X, \mathcal{A}) \to (X, \mathcal{A})$, the two-step transition kernel $k^{2} : (X, \mathcal{A}) \to (X, \mathcal{A})$ is given by
$k^{2}(A \mid x) = \int_{X} k(A \mid x')\,k(dx' \mid x)$
for all $x \in X$ and $A \in \mathcal{A}$.
One can interpret this as a sum, over all intermediate states, of pairs of independent probabilistic transitions.
More generally, given measurable spaces $(X, \mathcal{A})$, $(Y, \mathcal{B})$ and $(Z, \mathcal{C})$, and Markov kernels $k : (X, \mathcal{A}) \to (Y, \mathcal{B})$ and $h : (Y, \mathcal{B}) \to (Z, \mathcal{C})$, we get a composite kernel $h \circ k : (X, \mathcal{A}) \to (Z, \mathcal{C})$ by
$(h \circ k)(C \mid x) = \int_{Y} h(C \mid y)\,k(dy \mid x)$
for all $x \in X$ and $C \in \mathcal{C}$.
Because of this, Markov kernels, like stochastic matrices, form a category.
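On finite state spaces this composition is concrete: a kernel is a row-stochastic matrix and kernel composition is matrix multiplication. A small check, with arbitrary example matrices:

```python
import numpy as np

# On finite spaces a Markov kernel k: X -> Y is a row-stochastic matrix
# K[x, y] = k({y} | x), and the composite kernel (h ∘ k)(C | x) =
# sum_y h(C | y) k({y} | x) is just the matrix product K @ H.
K = np.array([[0.5, 0.5, 0.0],   # kernel X -> Y, with |X| = 2, |Y| = 3
              [0.1, 0.2, 0.7]])
H = np.array([[1.0, 0.0],        # kernel Y -> Z, with |Z| = 2
              [0.3, 0.7],
              [0.6, 0.4]])

HK = K @ H                       # composite kernel X -> Z
print(HK)
print(HK.sum(axis=1))            # each row still sums to 1
```

That the rows of the product again sum to one is exactly the statement that the composite of two Markov kernels is a Markov kernel.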
== Application to time-dilated Markov chains ==
When the stochastic process under consideration is Markovian, the Chapman–Kolmogorov equation is equivalent to an identity on transition densities. In the Markov chain setting, one assumes that i1 < ... < in. Then, because of the Markov property,
$p_{i_{1},\ldots ,i_{n}}(f_{1},\ldots ,f_{n}) = p_{i_{1}}(f_{1})\,p_{i_{2};i_{1}}(f_{2} \mid f_{1}) \cdots p_{i_{n};i_{n-1}}(f_{n} \mid f_{n-1}),$
where the conditional probability $p_{i;j}(f_{i} \mid f_{j})$ is the transition probability between the times $i > j$. So, the Chapman–Kolmogorov equation takes the form
$p_{i_{3};i_{1}}(f_{3} \mid f_{1}) = \int_{-\infty}^{\infty} p_{i_{3};i_{2}}(f_{3} \mid f_{2})\,p_{i_{2};i_{1}}(f_{2} \mid f_{1})\,df_{2}.$
Informally, this says that the probability of going from state 1 to state 3 can be found from the probabilities of going from 1 to an intermediate state 2 and then from 2 to 3, by adding up over all the possible intermediate states 2.
When the probability distribution on the state space of a Markov chain is discrete and the Markov chain is homogeneous, the Chapman–Kolmogorov equations can be expressed in terms of (possibly infinite-dimensional) matrix multiplication, thus:
$P(t+s) = P(t)\,P(s)$
where P(t) is the transition matrix of jump t, i.e., P(t) is the matrix such that entry (i,j) contains the probability of the chain moving from state i to state j in t steps.
As a corollary, it follows that to calculate the transition matrix of jump t, it is sufficient to raise the transition matrix of jump one to the power of t, that is
$P(t) = P^{t}.$
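The matrix form of the equations is easy to verify numerically on a small chain; the transition matrix below is an arbitrary illustrative example:

```python
import numpy as np

# A homogeneous 3-state chain: check P(t+s) = P(t) P(s) and P(t) = P^t.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)

# Entry (i, j) of P3 is the probability of moving from i to j in 3 steps,
# obtained by summing over all intermediate states -- the CKE marginalization.
print(np.allclose(P3, P2 @ P))           # P(2+1) = P(2) P(1): True
print(np.allclose(P3.sum(axis=1), 1.0))  # rows remain stochastic: True
```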
The differential form of the Chapman–Kolmogorov equation is known as a master equation.
== See also ==
Fokker–Planck equation (also known as Kolmogorov forward equation)
Kolmogorov backward equation
Examples of Markov chains
Category of Markov kernels
== Citations ==
== Further reading ==
Pavliotis, Grigorios A. (2014). "Markov Processes and the Chapman–Kolmogorov Equation". Stochastic Processes and Applications. New York: Springer. pp. 33–38. ISBN 978-1-4939-1322-0.
Ross, Sheldon M. (2014). "Chapter 4.2: Chapman−Kolmogorov Equations". Introduction to Probability Models (11th ed.). Academic Press. p. 187. ISBN 978-0-12-407948-9.
Perrone, Paolo (2024). Starting Category Theory. World Scientific. pp. 10, 11. doi:10.1142/9789811286018_0005. ISBN 978-981-12-8600-1.
In probability theory, a conditional event algebra (CEA) is an alternative to a standard, Boolean algebra of possible events (a set of possible events related to one another by the familiar operations and, or, and not) that contains not just ordinary events but also conditional events that have the form "if A, then B". The usual motivation for a CEA is to ground the definition of a probability function for events, P, that satisfies the equation P(if A then B) = P(A and B) / P(A).
== Motivation ==
In standard probability theory the occurrence of an event corresponds to a set of possible outcomes, each of which is an outcome that corresponds to the occurrence of the event. P(A), the probability of event A, is the sum of the probabilities of all outcomes that correspond to event A; P(B) is the sum of the probabilities of all outcomes that correspond to event B; and P(A and B) is the sum of the probabilities of all outcomes that correspond to both A and B. In other words, and, customarily represented by the logical symbol ∧, is interpreted as set intersection: P(A ∧ B) = P(A ∩ B). In the same vein, or, ∨, becomes set union, ∪, and not, ¬, becomes set complementation, ′. Any combination of events using the operations and, or, and not is also an event, and assigning probabilities to all outcomes generates a probability for every event. In technical terms, this means that the set of events and the three operations together constitute a Boolean algebra of sets, with an associated probability function.
In standard practice, P(if A, then B) is not interpreted as P(A′ ∪ B), following the rule of material implication, but rather as the conditional probability of B given A, P(B | A) = P(A ∩ B) / P(A). This raises a question: what about a probability like P(if A, then B, and if C, then D)? For this, there is no standard answer. What would be needed, for consistency, is a treatment of if-then as a binary operation, →, such that for conditional events A → B and C → D, P(A → B) = P(B | A), P(C → D) = P(D | C), and P((A → B) ∧ (C → D)) are well-defined and reasonable. Philosophers including Robert Stalnaker argued that ideally, a conditional event algebra, or CEA, would support a probability function that meets three conditions:
1. The probability function satisfies the usual axioms.
2. For any two ordinary events A and B, if P(A) > 0, then P(A → B) = P(B | A) = P(A ∧ B) / P(A).
3. For ordinary event A and acceptable probability function P, if P(A) > 0, then PA = P(· | A), the function produced by conditioning on A, is also an acceptable probability function.
However, David Lewis proved in 1976 a fact now known as Lewis's triviality result: these conditions can only be met with near-standard approaches in trivial examples. In particular, those conditions can only be met when there are just two possible outcomes—as with, say, a single coin flip. With three or more possible outcomes, constructing a probability function requires choosing which of the above three conditions to violate. Interpreting A → B as A′ ∪ B produces an ordinary Boolean algebra that violates 2. With CEAs, the choice is between 1 and 3.
== Types of conditional event algebra ==
=== Tri-event CEAs ===
Tri-event CEAs take their inspiration from three-valued logic, where the identification of logical conjunction, disjunction, and negation with simple set operations no longer applies. For ordinary events A and B, the tri-event A → B occurs when A and B both occur, fails to occur when A occurs but B does not, and is undecided when A fails to occur. (The term “tri-event” comes from de Finetti (1935): triévénement.) Ordinary events, which are never undecided, are incorporated into the algebra as tri-events conditional on Ω, the vacuous event represented by the entire sample space of outcomes; thus, A becomes Ω → A.
Since there are many three-valued logics, there are many possible tri-event algebras. Two types, however, have attracted more interest than the others. In one type, A ∧ B and A ∨ B are each undecided only when both A and B are undecided; when just one of them is, the conjunction or disjunction follows the other conjunct or disjunct. When negation is handled in the obvious way, with ¬A undecided just in case A is, this type of tri-event algebra corresponds to a three-valued logic proposed by Sobociński (1952) and favored by Belnap (1973), and is also implied by Adams's (1975) "quasi-conjunction" for conditionals. Schay (1968) was the first to propose an algebraic treatment, which Calabrese (1987) developed more fully.
The other type of tri-event CEA treats negation the same way as the first, but it treats conjunction and disjunction as min and max functions, respectively, with occurrence as the high value, failure as the low value, and undecidedness in between. This type of tri-event algebra corresponds to a three-valued logic proposed by Łukasiewicz (1920) and also favored by de Finetti (1935). Goodman, Nguyen and Walker (1991) eventually provided the algebraic formulation.
The probability of any tri-event is defined as the probability that it occurs divided by the probability that it either occurs or fails to occur. With this convention, conditions 2 and 3 above are satisfied by the two leading tri-event CEA types. Condition 1, however, fails. In a Sobociński-type algebra, ∧ does not distribute over ∨, so P(A ∧ (B ∨ C)) and P((A ∧ B) ∨ (A ∧ C)) need not be equal. In a Łukasiewicz-type algebra, ∧ distributes over ∨ but not over exclusive or,
⊕
{\displaystyle \oplus }
(A
⊕
{\displaystyle \oplus }
B = (A ∧ ¬B) ∨ (¬A ∧ B)). Also, tri-event CEAs are not complemented lattices, only pseudocomplemented, because in general, (A → B) ∧ ¬(A → B) cannot occur but can be undecided and therefore is not identical to Ω → ∅, the bottom element of the lattice. This means that P(C) and P(C
⊕
{\displaystyle \oplus }
((A → B) ∧ ¬(A → B))) can differ, when classically they would not.
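The failure of distributivity in a Sobociński-type algebra can be exhibited by a short truth-table computation. The encoding below (strings 'T', 'F', 'U' for occurs, fails, undecided) is an illustrative choice:

```python
# Three truth values: occurs (T), fails (F), undecided (U).
T, F, U = 'T', 'F', 'U'

# Sobocinski-type connectives: a conjunction or disjunction is undecided
# only when both arguments are; otherwise it follows the decided argument.
def conj(a, b):
    if a == U: return b
    if b == U: return a
    return T if (a == T and b == T) else F

def disj(a, b):
    if a == U: return b
    if b == U: return a
    return T if (a == T or b == T) else F

# Distributivity of ∧ over ∨ fails: take A = T, B = U, C = F.
A, B, C = T, U, F
lhs = conj(A, disj(B, C))             # A ∧ (B ∨ C) = T ∧ F = F
rhs = disj(conj(A, B), conj(A, C))    # (A ∧ B) ∨ (A ∧ C) = T ∨ F = T
print(lhs, rhs)  # F T
```

With these values the two sides even take opposite decided values, so the corresponding probabilities can certainly differ.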
=== Product-space CEAs ===
If P(if A, then B) is thought of as the probability of A-and-B occurring before A-and-not-B in a series of trials, this can be calculated as an infinite sum of simple probabilities: the probability of A-and-B on the first trial, plus the probability of not-A (and either B or not-B) on the first trial and A-and-B on the second, plus the probability of not-A on the first two trials and A-and-B on the third, and so on; that is, P(A ∧ B) + P(¬A)P(A ∧ B) + P(¬A)²P(A ∧ B) + …, or, in factored form, P(A ∧ B)[1 + P(¬A) + P(¬A)² + …]. Since the second factor is the Maclaurin series expansion of 1 / [1 − P(¬A)] = 1 / P(A), the infinite sum equals P(A ∧ B) / P(A) = P(B | A).
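The geometric-series argument is easy to check numerically; the probabilities below are illustrative values, not from the text:

```python
# Partial sums of P(A ∧ B) * [1 + P(¬A) + P(¬A)^2 + ...] converge
# to P(B | A). Illustrative values: P(A) = 0.5, P(A ∧ B) = 0.2.
p_A, p_AB = 0.5, 0.2
p_notA = 1 - p_A

# Truncate the series after 200 terms; the tail is negligible.
total = sum(p_AB * p_notA ** i for i in range(200))
print(total)          # ~ 0.4
print(p_AB / p_A)     # P(B | A) = 0.4
```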
The infinite sum is itself a simple probability, but with the sample space now containing not ordinary outcomes of single trials but infinite sequences of ordinary outcomes. Thus the conditional probability P(B | A) is turned into the simple probability P(A → B) by replacing Ω, the sample space of all ordinary outcomes, with Ω*, the sample space of all sequences of ordinary outcomes, and by identifying conditional event A → B with the set of sequences where the first (A ∧ B)-outcome comes before the first (A ∧ ¬B)-outcome. In Cartesian-product notation, Ω* = Ω × Ω × Ω × …, and A → B is the infinite union [(A ∩ B) × Ω × Ω × …] ∪ [A′ × (A ∩ B) × Ω × Ω × …] ∪ [A′ × A′ × (A ∩ B) × Ω × Ω × …] ∪ …. Unconditional event A is, again, represented by conditional event Ω → A. Unlike tri-event CEAs, this type of CEA supports the identification of ∧, ∨, and ¬ with the familiar operations ∩, ∪, and ′ not just for ordinary, unconditional events but for conditional ones as well. Because Ω* is a space defined by an infinitely long Cartesian product, the Boolean algebra of conditional-event subsets of Ω* is called a product-space CEA. This type of CEA was introduced by van Fraassen (1976), in response to Lewis's result, and was later discovered independently by Goodman and Nguyen (1994).
The probability functions associated with product-space CEAs satisfy conditions 1 and 2 above. However, given probability function P that satisfies conditions 1 and 2, if P(A) > 0, it can be shown that PA(C | B) = P(C | A ∧ B) and PA(B → C) = P(B ∧ C | A) + P(B′ | A)P(C | B). If A, B and C are pairwise compatible but P(A ∧ B ∧ C) = 0, then P(C | A ∧ B) = P(B ∧ C | A) = 0 but P(B′ | A)P(C | B) > 0. Therefore, PA(B → C) does not reliably equal PA(C | B). Since PA fails condition 2, P fails condition 3.
=== Nested if–thens ===
What about nested conditional constructions? In a tri-event CEA, right-nested constructions are handled more or less automatically, since it is natural to say that A → (B → C) takes the value of B → C (possibly undecided) when A is true and is undecided when A is false. Left-nesting, however, requires a more deliberate choice: when A → B is undecided, should (A → B) → C be undecided, or should it take the value of C? Opinions vary. Calabrese adopts the latter view, identifying (A → B) → (C → D) with ((¬A ∨ B) ∧ C) → D.
With a product-space CEA, nested conditionals call for nested sequence-constructions: evaluating P((A → B) → (C → D)) requires a sample space of metasequences of sequences of ordinary outcomes. The probabilities of the ordinary sequences are calculated as before. Given a series of trials where the outcomes are sequences of ordinary outcomes, P((A → B) → (C → D)) is P(C → D | A → B) = P((A → B) ∧ (C → D)) / P(A → B), the probability that an ((A → B) ∧ (C → D))-sequence will be encountered before an ((A → B) ∧ ¬(C → D))-sequence. Higher-order iterations of conditionals require higher-order metasequential constructions.
In either of the two leading types of tri-event CEA, A → (B → C) = (A ∧ B) → C. Product space CEAs, on the other hand, do not support this identity. The latter fact can be inferred from the failure, already noted, of PA(B → C) to equal PA(C | B), since PA(C | B) = P((A ∧ B) → C) and PA(B → C) = P(A → (B → C)). For a direct analysis, however, consider a metasequence whose first member-sequence starts with an (A ∧ ¬B ∧ C)-outcome, followed by a (¬A ∧ B ∧ C)-outcome, followed by an (A ∧ B ∧ ¬C)-outcome. That metasequence will belong to the event A → (B → C), because the first member-sequence is an (A ∧ (B → C))-sequence, but the metasequence will not belong to the event (A ∧ B) → C, because the first member-sequence is an ((A ∧ B) → ¬C)-sequence.
== Applications ==
The initial impetus for CEAs is theoretical—namely, the challenge of responding to Lewis's triviality result—but practical applications have been proposed. If, for instance, events A and C involve signals emitted by military radar stations and events B and D involve missile launches, an opposing military force with an automated missile defense system may want the system to be able to calculate P((A → B) ∧ (C → D)) and/or P((A → B) → (C → D)). Other applications range from image interpretation to the detection of denial-of-service attacks on computer networks.
== Notes ==
== References ==
Adams, E. W. 1975. The Logic of Conditionals. D. Reidel, Dordrecht.
Bamber, D., Goodman, I. R. and Nguyen, H. T. 2004. "Deduction from Conditional Knowledge". Soft Computing 8: 247–255.
Belnap, N. D. 1973. "Restricted quantification and conditional assertion", in H. Leblanc (ed.), Truth, Syntax and Modality North-Holland, Amsterdam. 48–75.
Calabrese, P. 1987. "An algebraic synthesis of the foundations of logic and probability". Information Sciences 42:187-237.
de Finetti, Bruno. 1935. "La logique de la probabilité". Actes du Congrès International Philosophie Scientifique. Paris.
van Fraassen, Bas C. 1976. "Probabilities of conditionals", in W. L. Harper and C. A. Hooker (eds.), Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, Vol. I. D. Reidel, Dordrecht, pp. 261–308.
Goodman, I. R., Mahler, R. P. S. and Nguyen, H. T. 1999. "What is conditional event algebra and why should you care?" SPIE Proceedings, Vol. 3720.
Goodman, I. R., Nguyen, H. T. and Walker, E. A. 1991. Conditional Inference and Logic for Intelligent Systems: A Theory of Measure-Free Conditioning. Office of Chief of Naval Research, Arlington, Virginia.
Goodman, I. R. and Nguyen, H. T. 1994. "A theory of conditional information for probabilistic inference in intelligent systems: II, Product space approach; III Mathematical appendix". Information Sciences 76:13-42; 75: 253-277.
Goodman, I. R. and Nguyen, H. T. 1995. "Mathematical foundations of conditionals and their probabilistic assignments". International Journal of Uncertainty, Fuzziness and Knowledge-based Systems 3(3): 247-339
Kelly, P. A., Derin, H., and Gong, W.-B. 1999. "Some applications of conditional events and random sets for image estimation and system modeling". SPIE Proceedings 3720: 14-24.
Łukasiewicz, J. 1920. "O logice trójwartościowej" (in Polish). Ruch Filozoficzny 5:170–171. English translation: "On three-valued logic", in L. Borkowski (ed.), Selected works by Jan Łukasiewicz, North–Holland, Amsterdam, 1970, pp. 87–88. ISBN 0-7204-2252-3
Schay, Geza. 1968. "An algebra of conditional events". Journal of Mathematical Analysis and Applications 24: 334-344.
Sobociński, B. 1952. "Axiomatization of a partial system of three-valued calculus of propositions". Journal of Computing Systems 1(1):23-55.
Sun, D., Yang, K., Jing, X., Lv, B., and Wang, Y. 2014. "Abnormal network traffic detection based on conditional event algebra". Applied Mechanics and Materials 644-650: 1093-1099.
In mathematics, Weingarten functions are rational functions indexed by partitions of integers that can be used to calculate integrals of products of matrix coefficients over classical groups. They were first studied by Weingarten (1978) who found their asymptotic behavior, and named by Collins (2003), who evaluated them explicitly for the unitary group.
== Unitary groups ==
Weingarten functions are used for evaluating integrals over the unitary group Ud of products of matrix coefficients of the form
$\int_{U_{d}} U_{i_{1}j_{1}} \cdots U_{i_{q}j_{q}}\, U_{i_{1}'j_{1}'}^{*} \cdots U_{i_{q}'j_{q}'}^{*}\, dU,$
where $*$ denotes complex conjugation. Note that $U_{ji}^{*} = (U^{\dagger})_{ij}$, where $U^{\dagger}$ is the conjugate transpose of $U$, so one can interpret the above expression as the $i_{1}j_{1}\ldots i_{q}j_{q}\,j_{1}'i_{1}'\ldots j_{q}'i_{q}'$ matrix element of $U \otimes \cdots \otimes U \otimes U^{\dagger} \otimes \cdots \otimes U^{\dagger}$.
This integral is equal to
$\sum_{\sigma, \tau \in S_{q}} \delta_{i_{1}i'_{\sigma(1)}} \cdots \delta_{i_{q}i'_{\sigma(q)}}\, \delta_{j_{1}j'_{\tau(1)}} \cdots \delta_{j_{q}j'_{\tau(q)}}\, \operatorname{Wg}(\sigma\tau^{-1}, d)$
where Wg is the Weingarten function, given by
$\operatorname{Wg}(\sigma, d) = \frac{1}{q!^{2}} \sum_{\lambda} \frac{\chi^{\lambda}(1)^{2}\, \chi^{\lambda}(\sigma)}{s_{\lambda,d}(1)}$
where the sum is over all partitions λ of q (Collins 2003). Here $\chi^{\lambda}$ is the character of Sq corresponding to the partition λ and $s_{\lambda,d}$ is the corresponding Schur polynomial, so that $s_{\lambda,d}(1)$ is the dimension of the representation of Ud corresponding to λ.
The Weingarten functions are rational functions in d. They can have poles for small values of d, which cancel out in the formula above. There is an alternative inequivalent definition of Weingarten functions, where one only sums over partitions with at most d parts. This is no longer a rational function of d, but is finite for all positive integers d. The two sorts of Weingarten functions coincide for d larger than q, and either can be used in the formula for the integral.
=== Values of the Weingarten function for simple permutations ===
The first few Weingarten functions Wg(σ, d) are
$\operatorname{Wg}(\emptyset, d) = 1$ (the trivial case where q = 0)
$\operatorname{Wg}(1, d) = \frac{1}{d}$
$\operatorname{Wg}(2, d) = \frac{-1}{d(d^{2}-1)}$
$\operatorname{Wg}(1^{2}, d) = \frac{1}{d^{2}-1}$
$\operatorname{Wg}(3, d) = \frac{2}{d(d^{2}-1)(d^{2}-4)}$
$\operatorname{Wg}(21, d) = \frac{-1}{(d^{2}-1)(d^{2}-4)}$
$\operatorname{Wg}(1^{3}, d) = \frac{d^{2}-2}{d(d^{2}-1)(d^{2}-4)}$
$\operatorname{Wg}(4, d) = \frac{-5}{d(d^{2}-1)(d^{2}-4)(d^{2}-9)}$
$\operatorname{Wg}(31, d) = \frac{2d^{2}-3}{d^{2}(d^{2}-1)(d^{2}-4)(d^{2}-9)}$
$\operatorname{Wg}(2^{2}, d) = \frac{d^{2}+6}{d^{2}(d^{2}-1)(d^{2}-4)(d^{2}-9)}$
$\operatorname{Wg}(21^{2}, d) = \frac{-1}{d(d^{2}-1)(d^{2}-9)}$
$\operatorname{Wg}(1^{4}, d) = \frac{d^{4}-8d^{2}+6}{d^{2}(d^{2}-1)(d^{2}-4)(d^{2}-9)}$
where permutations σ are denoted by their cycle shapes.
There exist computer algebra programs to produce these expressions.
=== Explicit expressions for the integrals in the first cases ===
The explicit expressions for the integrals of first- and second-degree polynomials, obtained via the formula above, are:
$\int_{U_{d}} dU\, U_{ij} \bar{U}_{k\ell} = \delta_{ik}\delta_{j\ell}\, \operatorname{Wg}(1, d) = \frac{\delta_{ik}\delta_{j\ell}}{d}.$
$\int_{U_{d}} dU\, U_{ij} U_{k\ell} \bar{U}_{mn} \bar{U}_{pq} = (\delta_{im}\delta_{jn}\delta_{kp}\delta_{\ell q} + \delta_{ip}\delta_{jq}\delta_{km}\delta_{\ell n}) \operatorname{Wg}(1^{2}, d) + (\delta_{im}\delta_{jq}\delta_{kp}\delta_{\ell n} + \delta_{ip}\delta_{jn}\delta_{km}\delta_{\ell q}) \operatorname{Wg}(2, d).$
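These low-degree formulas can be checked by Monte Carlo integration over the Haar measure. The sketch below samples Haar-random unitaries via QR decomposition of a complex Ginibre matrix with the usual phase correction; the dimension, sample count and tolerances are illustrative choices.

```python
import numpy as np

def haar_unitary(d, rng):
    """Haar-random element of U(d): QR of a complex Ginibre matrix,
    with column phases fixed so the distribution is exactly Haar."""
    Z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

rng = np.random.default_rng(0)
d = 3
samples = np.array([haar_unitary(d, rng)[0, 0] for _ in range(20000)])

# First-degree formula with i = k, j = l: E|U_00|^2 = Wg(1, d) = 1/d.
m2 = np.mean(np.abs(samples) ** 2)
# Second-degree formula with all indices equal:
# E|U_00|^4 = 2(Wg(1^2, d) + Wg(2, d)) = 2/(d(d+1)).
m4 = np.mean(np.abs(samples) ** 4)
print(m2, m4)  # close to 1/3 and 1/6 for d = 3
```

Plugging i = k = m = p and j = l = n = q into the second-degree formula makes all four delta products equal to one, which is where the factor 2(Wg(1², d) + Wg(2, d)) = 2/(d(d+1)) comes from.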
=== Asymptotic behavior ===
For large d, the Weingarten function Wg has the asymptotic behavior
$\operatorname{Wg}(\sigma, d) = d^{-n-|\sigma|} \prod_{i} (-1)^{|C_{i}|-1} c_{|C_{i}|-1} + O(d^{-n-|\sigma|-2})$
where the permutation σ is a product of cycles of lengths Ci, cn = (2n)!/(n!(n+1)!) is the nth Catalan number, and |σ| is the smallest number of transpositions of which σ is a product. There exists a diagrammatic method for systematically calculating integrals over the unitary group as a power series in 1/d.
== Orthogonal and symplectic groups ==
For orthogonal and symplectic groups the Weingarten functions were evaluated by Collins & Śniady (2006). Their theory is similar to the case of the unitary group. They are parameterized by partitions such that all parts have even size.
== External links ==
Collins, Benoît (2003), "Moments and cumulants of polynomial random variables on unitary groups, the Itzykson-Zuber integral, and free probability", International Mathematics Research Notices, 2003 (17): 953–982, arXiv:math-ph/0205010, doi:10.1155/S107379280320917X, MR 1959915
Collins, Benoît; Śniady, Piotr (2006), "Integration with respect to the Haar measure on unitary, orthogonal and symplectic group", Communications in Mathematical Physics, 264 (3): 773–795, arXiv:math-ph/0402073, Bibcode:2006CMaPh.264..773C, doi:10.1007/s00220-006-1554-3, MR 2217291, S2CID 16122807
Weingarten, Don (1978), "Asymptotic behavior of group integrals in the limit of infinite rank", Journal of Mathematical Physics, 19 (5): 999–1001, Bibcode:1978JMP....19..999W, doi:10.1063/1.523807, MR 0471696
== References ==
In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities. In its simplest form, it relates the expectation of a sum of randomly many finite-mean, independent and identically distributed random variables to the expected number of terms in the sum and the random variables' common expectation under the condition that the number of terms in the sum is independent of the summands.
The equation is named after the mathematician Abraham Wald. An identity for the second moment is given by the Blackwell–Girshick equation.
== Basic version ==
Let (Xn)n∈ℕ be a sequence of real-valued, independent and identically distributed random variables and let N ≥ 0 be an integer-valued random variable that is independent of the sequence (Xn)n∈ℕ. Suppose that N and the Xn have finite expectations. Then
{\displaystyle \operatorname {E} [X_{1}+\dots +X_{N}]=\operatorname {E} [N]\operatorname {E} [X_{1}]\,.}
== Example ==
Roll a six-sided die. Take the number on the die (call it N) and roll that number of six-sided dice to get the numbers X1, . . . , XN, and add up their values. By Wald's equation, the resulting value on average is
{\displaystyle \operatorname {E} [N]\operatorname {E} [X]={\frac {1+2+3+4+5+6}{6}}\cdot {\frac {1+2+3+4+5+6}{6}}={\frac {441}{36}}={\frac {49}{4}}=12.25\,.}
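This arithmetic can be verified exactly with rational arithmetic: by linearity, E[SN | N = n] = n E[X], so conditioning on N reproduces Wald's product. A minimal sketch:

```python
from fractions import Fraction

# Exact check of the dice example: N and each X_i are fair six-sided die rolls.
faces = range(1, 7)
e_face = Fraction(sum(faces), 6)  # E[N] = E[X] = 7/2

# Wald's equation: E[S_N] = E[N] E[X]
wald = e_face * e_face

# Brute force: condition on N = n, where E[S_N | N = n] = n E[X] by linearity.
brute = sum(Fraction(1, 6) * n * e_face for n in faces)

print(wald, brute)  # 49/4 49/4
```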
== General version ==
Let (Xn)n∈ℕ be an infinite sequence of real-valued random variables and let N be a nonnegative integer-valued random variable.
Assume that:
1. (Xn)n∈ℕ are all integrable (finite-mean) random variables,
2. E[Xn1{N ≥ n}] = E[Xn] P(N ≥ n) for every natural number n, and
3. the infinite series satisfies
{\displaystyle \sum _{n=1}^{\infty }\operatorname {E} \!{\bigl [}|X_{n}|1_{\{N\geq n\}}{\bigr ]}<\infty .}
Then the random sums
{\displaystyle S_{N}:=\sum _{n=1}^{N}X_{n},\qquad T_{N}:=\sum _{n=1}^{N}\operatorname {E} [X_{n}]}
are integrable and
{\displaystyle \operatorname {E} [S_{N}]=\operatorname {E} [T_{N}].}
If, in addition,
4. (Xn)n∈ℕ all have the same expectation, and
5. N has finite expectation,
then
{\displaystyle \operatorname {E} [S_{N}]=\operatorname {E} [N]\,\operatorname {E} [X_{1}].}
Remark: Usually, the name Wald's equation refers to this last equality.
== Discussion of assumptions ==
Clearly, assumption (1) is needed to formulate assumption (2) and Wald's equation. Assumption (2) controls the amount of dependence allowed between the sequence (Xn)n∈ℕ and the number N of terms; see the counterexample below for the necessity. Note that assumption (2) is satisfied when N is a stopping time for a sequence of independent random variables (Xn)n∈ℕ. Assumption (3) is of a more technical nature, implying absolute convergence and therefore allowing arbitrary rearrangement of an infinite series in the proof.
If assumption (5) is satisfied, then assumption (3) can be strengthened to the simpler condition
6. there exists a real constant C such that E[|Xn| 1{N ≥ n}] ≤ C P(N ≥ n) for all natural numbers n.
Indeed, using assumption (6),
{\displaystyle \sum _{n=1}^{\infty }\operatorname {E} \!{\bigl [}|X_{n}|1_{\{N\geq n\}}{\bigr ]}\leq C\sum _{n=1}^{\infty }\operatorname {P} (N\geq n),}
and the last series equals the expectation of N, which is finite by assumption (5). Therefore, (5) and (6) imply assumption (3).
Assume in addition to (1) and (5) that
7. N is independent of the sequence (Xn)n∈ℕ, and
8. there exists a constant C such that E[|Xn|] ≤ C for all natural numbers n.
Then all the assumptions (1), (2), (5) and (6), hence also (3) are satisfied. In particular, the conditions (4) and (8) are satisfied if
9. the random variables (Xn)n∈ℕ all have the same distribution.
Note that the random variables of the sequence (Xn)n∈ℕ don't need to be independent.
The interesting point is to admit some dependence between the random number N of terms and the sequence (Xn)n∈ℕ. A standard version is to assume (1), (5), (8) and the existence of a filtration (Fn)n∈ℕ0 such that
10. N is a stopping time with respect to the filtration, and
11. Xn and Fn–1 are independent for every n ∈ ℕ.
Then (10) implies that the event {N ≥ n} = {N ≤ n – 1}c is in Fn–1, hence by (11) independent of Xn. This implies (2), and together with (8) it implies (6).
For convenience (see the proof below using the optional stopping theorem) and to specify the relation of the sequence (Xn)n∈ℕ and the filtration (Fn)n∈ℕ0, the following additional assumption is often imposed:
12. the sequence (Xn)n∈ℕ is adapted to the filtration (Fn)n∈ℕ, meaning that Xn is Fn-measurable for every n ∈ ℕ.
Note that (11) and (12) together imply that the random variables (Xn)n∈ℕ are independent.
== Application ==
An application arises in actuarial science when the total claim amount within a certain time period, say one year, follows a compound Poisson process
{\displaystyle S_{N}=\sum _{n=1}^{N}X_{n}}
arising from a random number N of individual insurance claims, whose sizes are described by the random variables (Xn)n∈ℕ. Under the above assumptions, Wald's equation can be used to calculate the expected total claim amount when information about the average claim number per year and the average claim size is available. Under stronger assumptions and with more information about the underlying distributions, Panjer's recursion can be used to calculate the distribution of SN.
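A seeded simulation illustrates the identity with made-up numbers (the Poisson rate of 20 claims per year and the mean claim size of 1500 are assumptions for illustration, not from the source): the average yearly total should approach E[N] E[X1] = 20 · 1500 = 30000.

```python
import math
import random

rnd = random.Random(1)
rate, mean_claim, years = 20.0, 1500.0, 20000

def poisson(mu, rnd):
    # Knuth's multiplication method for sampling a Poisson(mu) count.
    k, p = 0, rnd.random()
    while p > math.exp(-mu):
        k += 1
        p *= rnd.random()
    return k

total = 0.0
for _ in range(years):
    n_claims = poisson(rate, rnd)                                  # random N
    total += sum(rnd.expovariate(1 / mean_claim) for _ in range(n_claims))

print(abs(total / years - rate * mean_claim) < 500)  # True
```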
== Examples ==
=== Example with dependent terms ===
Let N be an integrable, ℕ0-valued random variable, which is independent of the integrable, real-valued random variable Z with E[Z] = 0. Define Xn = (–1)^n Z for all n ∈ ℕ. Then assumptions (1), (5), (7), and (8) with C := E[|Z|] are satisfied, hence also (2) and (6), and Wald's equation applies. If the distribution of Z is not symmetric, then (9) does not hold. Note that, when Z is not almost surely equal to the zero random variable, then (11) and (12) cannot hold simultaneously for any filtration (Fn)n∈ℕ, because Z cannot be independent of itself as E[Z^2] = (E[Z])^2 = 0 is impossible.
=== Example where the number of terms depends on the sequence ===
Let (Xn)n∈ℕ be a sequence of independent, symmetric, and {–1, +1}-valued random variables. For every n ∈ ℕ let Fn be the σ-algebra generated by X1, . . . , Xn and define N = n when Xn is the first random variable taking the value +1. Note that P(N = n) = 1/2^n, hence E[N] < ∞ by the ratio test. The assumptions (1), (5) and (9), hence (4) and (8) with C = 1, (10), (11), and (12) hold, hence also (2), and (6) and Wald's equation applies. However, (7) does not hold, because N is defined in terms of the sequence (Xn)n∈ℕ. Intuitively, one might expect to have E[SN] > 0 in this example, because the summation stops right after a one, thereby apparently creating a positive bias. However, Wald's equation shows that this intuition is misleading.
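The conclusion for this example, E[SN] = E[N] E[X1] = 0, can be checked by a seeded simulation (the trial count is an arbitrary choice):

```python
import random

# X_n are fair ±1 coin flips; stop the sum right after the first +1.
rnd = random.Random(3)
trials = 200000
total = 0
for _ in range(trials):
    s = 0
    while True:
        x = 1 if rnd.random() < 0.5 else -1
        s += x
        if x == 1:
            break  # N is the index of the first +1
    total += s

# S_N = 2 - N exactly, and E[N] = 2, so the average hovers near 0.
print(abs(total / trials) < 0.02)  # True
```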
== Counterexamples ==
=== A counterexample illustrating the necessity of assumption (2) ===
Consider a sequence (Xn)n∈ℕ of i.i.d. (independent and identically distributed) random variables, taking each of the two values 0 and 1 with probability 1/2 (actually, only X1 is needed in the following). Define N = 1 – X1. Then SN is identically equal to zero, hence E[SN] = 0, but E[X1] = 1/2 and E[N] = 1/2 and therefore Wald's equation does not hold. Indeed, the assumptions (1), (3), (4) and (5) are satisfied; however, the equation in assumption (2) holds for all n ∈ ℕ except for n = 1.
=== A counterexample illustrating the necessity of assumption (3) ===
Very similar to the second example above, let (Xn)n∈ℕ be a sequence of independent, symmetric random variables, where Xn takes each of the values 2^n and –2^n with probability 1/2. Let N be the first n ∈ ℕ such that Xn = 2^n. Then, as above, N has finite expectation, hence assumption (5) holds. Since E[Xn] = 0 for all n ∈ ℕ, assumptions (1) and (4) hold. However, since SN = 1 almost surely, Wald's equation cannot hold.
Since N is a stopping time with respect to the filtration generated by (Xn)n∈ℕ, assumption (2) holds, see above. Therefore, only assumption (3) can fail, and indeed, since
{\displaystyle \{N\geq n\}=\{X_{i}=-2^{i}{\text{ for }}i=1,\ldots ,n-1\}}
and therefore P(N ≥ n) = 1/2^(n–1) for every n ∈ ℕ, it follows that
{\displaystyle \sum _{n=1}^{\infty }\operatorname {E} \!{\bigl [}|X_{n}|1_{\{N\geq n\}}{\bigr ]}=\sum _{n=1}^{\infty }2^{n}\,\operatorname {P} (N\geq n)=\sum _{n=1}^{\infty }2=\infty .}
== A proof using the optional stopping theorem ==
Assume (1), (5), (8), (10), (11) and (12). Using assumption (1), define the sequence of random variables
{\displaystyle M_{n}=\sum _{i=1}^{n}(X_{i}-\operatorname {E} [X_{i}]),\quad n\in {\mathbb {N} }_{0}.}
Assumption (11) implies that the conditional expectation of Xn given Fn–1 equals E[Xn] almost surely for every n ∈ ℕ, hence (Mn)n∈ℕ0 is a martingale with respect to the filtration (Fn)n∈ℕ0 by assumption (12). Assumptions (5), (8) and (10) make sure that we can apply the optional stopping theorem, hence MN = SN – TN is integrable and

{\displaystyle \operatorname {E} [S_{N}-T_{N}]=\operatorname {E} [M_{N}]=\operatorname {E} [M_{0}]=0.\qquad (13)}
Due to assumption (8),
{\displaystyle |T_{N}|={\biggl |}\sum _{i=1}^{N}\operatorname {E} [X_{i}]{\biggr |}\leq \sum _{i=1}^{N}\operatorname {E} [|X_{i}|]\leq CN,}
and due to assumption (5) this upper bound is integrable. Hence we can add the expectation of TN to both sides of Equation (13) and obtain by linearity
{\displaystyle \operatorname {E} [S_{N}]=\operatorname {E} [T_{N}].}
Remark: Note that this proof does not cover the above example with dependent terms.
== General proof ==
This proof uses only Lebesgue's monotone and dominated convergence theorems.
We prove the statement as given above in three steps.
=== Step 1: Integrability of the random sum SN ===
We first show that the random sum SN is integrable. Define the partial sums

{\displaystyle S_{i}=\sum _{n=1}^{i}X_{n},\quad i\in {\mathbb {N} }_{0}.\qquad (14)}

Since N takes its values in ℕ0 and since S0 = 0, it follows that
{\displaystyle |S_{N}|=\sum _{i=1}^{\infty }|S_{i}|\,1_{\{N=i\}}.}
The Lebesgue monotone convergence theorem implies that
{\displaystyle \operatorname {E} [|S_{N}|]=\sum _{i=1}^{\infty }\operatorname {E} [|S_{i}|\,1_{\{N=i\}}].}
By the triangle inequality,
{\displaystyle |S_{i}|\leq \sum _{n=1}^{i}|X_{n}|,\quad i\in {\mathbb {N} }.}
Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain

{\displaystyle \operatorname {E} [|S_{N}|]\leq \sum _{i=1}^{\infty }\sum _{n=1}^{i}\operatorname {E} {\bigl [}|X_{n}|1_{\{N=i\}}{\bigr ]}=\sum _{n=1}^{\infty }\operatorname {E} {\bigl [}|X_{n}|1_{\{N\geq n\}}{\bigr ]},\qquad (15)}

where the second inequality follows using the monotone convergence theorem. By assumption (3), the series on the right-hand side of (15) converges, hence SN is integrable.
=== Step 2: Integrability of the random sum TN ===
We now show that the random sum TN is integrable. Define the partial sums

{\displaystyle T_{i}=\sum _{n=1}^{i}\operatorname {E} [X_{n}],\quad i\in {\mathbb {N} }_{0},\qquad (16)}

of real numbers. Since N takes its values in ℕ0 and since T0 = 0, it follows that
{\displaystyle |T_{N}|=\sum _{i=1}^{\infty }|T_{i}|\,1_{\{N=i\}}.}
As in step 1, the Lebesgue monotone convergence theorem implies that
{\displaystyle \operatorname {E} [|T_{N}|]=\sum _{i=1}^{\infty }|T_{i}|\operatorname {P} (N=i).}
By the triangle inequality,
{\displaystyle |T_{i}|\leq \sum _{n=1}^{i}{\bigl |}\!\operatorname {E} [X_{n}]{\bigr |},\quad i\in {\mathbb {N} }.}
Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain

{\displaystyle \operatorname {E} [|T_{N}|]\leq \sum _{n=1}^{\infty }{\bigl |}\!\operatorname {E} [X_{n}]{\bigr |}\operatorname {P} (N\geq n).\qquad (17)}
By assumption (2),
{\displaystyle {\bigl |}\!\operatorname {E} [X_{n}]{\bigr |}\operatorname {P} (N\geq n)={\bigl |}\!\operatorname {E} [X_{n}1_{\{N\geq n\}}]{\bigr |}\leq \operatorname {E} [|X_{n}|1_{\{N\geq n\}}],\quad n\in {\mathbb {N} }.}
Substituting this into (17) yields
{\displaystyle \operatorname {E} [|T_{N}|]\leq \sum _{n=1}^{\infty }\operatorname {E} [|X_{n}|1_{\{N\geq n\}}],}
which is finite by assumption (3), hence TN is integrable.
=== Step 3: Proof of the identity ===
To prove Wald's equation, we essentially go through the same steps again without the absolute value, making use of the integrability of the random sums SN and TN in order to show that they have the same expectation.
Using the dominated convergence theorem with dominating random variable |SN| and the definition of the partial sum Si given in (14), it follows that
{\displaystyle \operatorname {E} [S_{N}]=\sum _{i=1}^{\infty }\operatorname {E} [S_{i}1_{\{N=i\}}]=\sum _{i=1}^{\infty }\sum _{n=1}^{i}\operatorname {E} [X_{n}1_{\{N=i\}}].}
Due to the absolute convergence proved in (15) above using assumption (3), we may rearrange the summation and obtain that
{\displaystyle \operatorname {E} [S_{N}]=\sum _{n=1}^{\infty }\sum _{i=n}^{\infty }\operatorname {E} [X_{n}1_{\{N=i\}}]=\sum _{n=1}^{\infty }\operatorname {E} [X_{n}1_{\{N\geq n\}}],}
where we used assumption (1) and the dominated convergence theorem with dominating random variable |Xn| for the second equality. Due to assumption (2) and the σ-additivity of the probability measure,
{\displaystyle {\begin{aligned}\operatorname {E} [X_{n}1_{\{N\geq n\}}]&=\operatorname {E} [X_{n}]\operatorname {P} (N\geq n)\\&=\operatorname {E} [X_{n}]\sum _{i=n}^{\infty }\operatorname {P} (N=i)=\sum _{i=n}^{\infty }\operatorname {E} \!{\bigl [}\operatorname {E} [X_{n}]1_{\{N=i\}}{\bigr ]}.\end{aligned}}}
Substituting this result into the previous equation, rearranging the summation (which is permitted due to absolute convergence, see (15) above), using linearity of expectation and the definition of the partial sum Ti of expectations given in (16),
{\displaystyle \operatorname {E} [S_{N}]=\sum _{i=1}^{\infty }\sum _{n=1}^{i}\operatorname {E} \!{\bigl [}\operatorname {E} [X_{n}]1_{\{N=i\}}{\bigr ]}=\sum _{i=1}^{\infty }\operatorname {E} [\underbrace {T_{i}1_{\{N=i\}}} _{=\,T_{N}1_{\{N=i\}}}].}
By using dominated convergence again with dominating random variable |TN|,
{\displaystyle \operatorname {E} [S_{N}]=\operatorname {E} \!{\biggl [}T_{N}\underbrace {\sum _{i=1}^{\infty }1_{\{N=i\}}} _{=\,1_{\{N\geq 1\}}}{\biggr ]}=\operatorname {E} [T_{N}].}
If assumptions (4) and (5) are satisfied, then by linearity of expectation,
{\displaystyle \operatorname {E} [T_{N}]=\operatorname {E} \!{\biggl [}\sum _{n=1}^{N}\operatorname {E} [X_{n}]{\biggr ]}=\operatorname {E} [X_{1}]\operatorname {E} \!{\biggl [}\underbrace {\sum _{n=1}^{N}1} _{=\,N}{\biggr ]}=\operatorname {E} [N]\operatorname {E} [X_{1}].}
This completes the proof.
== Further generalizations ==
Wald's equation can be transferred to Rd-valued random variables (Xn)n∈ℕ by applying the one-dimensional version to every component.
If (Xn)n∈ℕ are Bochner-integrable random variables taking values in a Banach space, then the general proof above can be adjusted accordingly.
== See also ==
Lorden's inequality
Wald's martingale
Spitzer's formula
== Notes ==
== References ==
Wald, Abraham (September 1944). "On cumulative sums of random variables". The Annals of Mathematical Statistics. 15 (3): 283–296. doi:10.1214/aoms/1177731235. JSTOR 2236250. MR 0010927. Zbl 0063.08122.
Wald, Abraham (1945). "Some generalizations of the theory of cumulative sums of random variables". The Annals of Mathematical Statistics. 16 (3): 287–293. doi:10.1214/aoms/1177731092. JSTOR 2235707. MR 0013852. Zbl 0063.08129.
Blackwell, D.; Girshick, M. A. (1946). "On functions of sequences of independent chance vectors with applications to the problem of the 'random walk' in k dimensions". Ann. Math. Statist. 17 (3): 310–317. doi:10.1214/aoms/1177730943.
Chan, Hock Peng; Fuh, Cheng-Der; Hu, Inchi (2006). "Multi-armed bandit problem with precedence relations". Time Series and Related Topics. Institute of Mathematical Statistics Lecture Notes - Monograph Series. Vol. 52. pp. 223–235. arXiv:math/0702819. doi:10.1214/074921706000001067. ISBN 978-0-940600-68-3. S2CID 18813099.
== External links ==
"Wald identity", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Inverse transform sampling (also known as inversion sampling, the inverse probability integral transform, the inverse transformation method, or the Smirnov transform) is a basic method for pseudo-random number sampling, i.e., for generating sample numbers at random from any probability distribution given its cumulative distribution function.
Inverse transformation sampling takes uniform samples of a number u between 0 and 1, interpreted as a probability, and then returns the smallest number x ∈ ℝ such that F(x) ≥ u for the cumulative distribution function F of a random variable. For example, imagine that F is the standard normal distribution with mean zero and standard deviation one. The table below shows samples taken from the uniform distribution and their representation on the standard normal distribution.
We are randomly choosing a proportion of the area under the curve and returning the number in the domain such that exactly this proportion of the area occurs to the left of that number. Intuitively, we are unlikely to choose a number in the far end of tails because there is very little area in them which would require choosing a number very close to zero or one.
Computationally, this method involves computing the quantile function of the distribution — in other words, computing the cumulative distribution function (CDF) of the distribution (which maps a number in the domain to a probability between 0 and 1) and then inverting that function. This is the source of the term "inverse" or "inversion" in most of the names for this method. Note that for a discrete distribution, computing the CDF is not in general too difficult: we simply add up the individual probabilities for the various points of the distribution. For a continuous distribution, however, we need to integrate the probability density function (PDF) of the distribution, which is impossible to do analytically for most distributions (including the normal distribution). As a result, this method may be computationally inefficient for many distributions and other methods are preferred; however, it is a useful method for building more generally applicable samplers such as those based on rejection sampling.
For the normal distribution, the lack of an analytical expression for the corresponding quantile function means that other methods (e.g. the Box–Muller transform) may be preferred computationally. It is often the case that, even for simple distributions, the inverse transform sampling method can be improved on: see, for example, the ziggurat algorithm and rejection sampling. On the other hand, it is possible to approximate the quantile function of the normal distribution extremely accurately using moderate-degree polynomials, and in fact the method of doing this is fast enough that inversion sampling is now the default method for sampling from a normal distribution in the statistical package R.
== Formal statement ==
For any random variable X ∈ ℝ, the random variable F_X^{-1}(U) has the same distribution as X, where F_X^{-1} is the generalized inverse of the cumulative distribution function F_X of X and U is uniform on [0, 1].
For continuous random variables, the inverse probability integral transform is indeed the inverse of the probability integral transform, which states that for a continuous random variable X with cumulative distribution function F_X, the random variable U = F_X(X) is uniform on [0, 1].
== Intuition ==
From U ~ Unif[0, 1], we want to generate X with CDF F_X(x). We assume F_X(x) to be a continuous, strictly increasing function, which provides good intuition.
We want to see if we can find some strictly monotone transformation T : [0, 1] → ℝ, such that T(U) =d X (equality in distribution). We will have
{\displaystyle F_{X}(x)=\Pr(X\leq x)=\Pr(T(U)\leq x)=\Pr(U\leq T^{-1}(x))=T^{-1}(x),{\text{ for }}x\in \mathbb {R} ,}
where the last step used that Pr(U ≤ y) = y when U is uniform on [0, 1].
So F_X is the inverse function of T, or, equivalently
{\displaystyle T(u)=F_{X}^{-1}(u),u\in [0,1].}
Therefore, we can generate X from F_X^{-1}(U).
== The method ==
The problem that the inverse transform sampling method solves is as follows:
Let X be a random variable whose distribution can be described by the cumulative distribution function F_X. We want to generate values of X which are distributed according to this distribution.
The inverse transform sampling method works as follows:
Generate a random number u from the standard uniform distribution in the interval [0, 1], i.e. from U ~ Unif[0, 1].
Find the generalized inverse of the desired CDF, i.e. F_X^{-1}(u).
Compute X′(u) = F_X^{-1}(u). The computed random variable X′(U) has distribution F_X and thereby the same law as X.
Expressed differently, given a cumulative distribution function F_X and a uniform variable U ∈ [0, 1], the random variable X = F_X^{-1}(U) has the distribution F_X.
In the continuous case, a treatment of such inverse functions as objects satisfying differential equations can be given. Some such differential equations admit explicit power series solutions, despite their non-linearity.
== Examples ==
As an example, suppose we have a random variable U ~ Unif(0, 1) and a cumulative distribution function
{\displaystyle {\begin{aligned}F(x)=1-\exp(-{\sqrt {x}})\end{aligned}}}
In order to perform an inversion we want to solve for
{\displaystyle F(F^{-1}(u))=u}
{\displaystyle {\begin{aligned}F(F^{-1}(u))&=u\\1-\exp \left(-{\sqrt {F^{-1}(u)}}\right)&=u\\F^{-1}(u)&=(-\log(1-u))^{2}\\&=(\log(1-u))^{2}\end{aligned}}}
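A deterministic round-trip check of this worked inversion confirms that the derived expression really inverts F:

```python
import math

# F and its inverse from the worked example F(x) = 1 - exp(-sqrt(x)).
def F(x):
    return 1 - math.exp(-math.sqrt(x))

def F_inv(u):
    return math.log(1 - u) ** 2

for u in (0.1, 0.5, 0.9):
    assert abs(F(F_inv(u)) - u) < 1e-12  # F(F^{-1}(u)) recovers u
print("ok")
```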
From here we would perform steps one, two and three.
As another example, we use the exponential distribution with
{\displaystyle F_{X}(x)=1-e^{-\lambda x}}
for x ≥ 0 (and 0 otherwise). By solving y = F(x) we obtain the inverse function
{\displaystyle x=F^{-1}(y)=-{\frac {1}{\lambda }}\ln(1-y).}
It means that if we draw some y0 from a U ~ Unif(0, 1) and compute
{\displaystyle x_{0}=F_{X}^{-1}(y_{0})=-{\frac {1}{\lambda }}\ln(1-y_{0}),}
then this x0 has exponential distribution.
The idea is illustrated in the following graph:
Note that the distribution does not change if we start with 1-y instead of y. For computational purposes, it therefore suffices to generate random numbers y in [0, 1] and then simply calculate
{\displaystyle x=F^{-1}(y)=-{\frac {1}{\lambda }}\ln(y).}
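A short sketch of the full recipe for the exponential case; the seed, rate λ = 2, and sample count are arbitrary choices. The sample mean should be near 1/λ, and the empirical CDF at 0.5 should be near F(0.5) = 1 − e^{−1} for λ adjusted accordingly (here we check the empirical mean and the empirical CDF at x = 1/λ).

```python
import math
import random

rnd = random.Random(42)
lam, n = 2.0, 100000

# Step 1: uniform u in [0, 1); steps 2-3: apply F^{-1}(u) = -ln(1 - u)/lambda.
samples = [-math.log(1 - rnd.random()) / lam for _ in range(n)]

mean = sum(samples) / n
below = sum(1 for x in samples if x <= 1 / lam) / n  # empirical F(1/lam)
print(abs(mean - 1 / lam) < 0.01, abs(below - (1 - math.exp(-1))) < 0.01)  # True True
```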
== Proof of correctness ==
Let F be a cumulative distribution function, and let F^{-1} be its generalized inverse function (using the infimum because CDFs are weakly monotonic and right-continuous):
{\displaystyle F^{-1}(u)=\inf \;\{x\mid F(x)\geq u\}\qquad (0<u<1).}
Claim: If U is a uniform random variable on [0, 1] then F^{-1}(U) has F as its CDF.
Proof:
{\displaystyle {\begin{aligned}&\Pr(F^{-1}(U)\leq x)\\&{}=\Pr(U\leq F(x))\quad &(F{\text{ is right-continuous, so }}\{u:F^{-1}(u)\leq x\}=\{u:u\leq F(x)\})\\&{}=F(x)\quad &({\text{because }}\Pr(U\leq u)=u,{\text{ when }}U{\text{ is uniform on }}[0,1])\\\end{aligned}}}
== Truncated distribution ==
Inverse transform sampling can be simply extended to cases of truncated distributions on the interval (a, b] without the cost of rejection sampling: the same algorithm can be followed, but instead of generating a random number u uniformly distributed between 0 and 1, generate u uniformly distributed between F(a) and F(b), and then again take F^{-1}(u).
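As a minimal sketch of this recipe, assume a unit-rate exponential truncated to (a, b] = (1, 3] (all constants are arbitrary choices for illustration):

```python
import math
import random

rnd = random.Random(7)

def F(x):          # CDF of Exp(1)
    return 1 - math.exp(-x)

def F_inv(u):      # its generalized inverse
    return -math.log(1 - u)

a, b, n = 1.0, 3.0, 50000
lo, hi = F(a), F(b)
# Draw u uniformly between F(a) and F(b) instead of between 0 and 1.
xs = [F_inv(rnd.uniform(lo, hi)) for _ in range(n)]

# By monotonicity of F^{-1}, every sample lands in the truncation window
# (up to floating-point rounding at the endpoints).
print(min(xs) >= a - 1e-9, max(xs) <= b + 1e-9)  # True True
```

No sample is ever rejected, which is the advantage over rejection-based truncation.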
== Reduction of the number of inversions ==
In order to obtain a large number of samples, one needs to perform the same number of inversions of the distribution.
One possible way to reduce the number of inversions while obtaining a large number of samples is the application of the so-called Stochastic Collocation Monte Carlo sampler (SCMC sampler) within a polynomial chaos expansion framework. This allows us to generate any number of Monte Carlo samples with only a few inversions of the original distribution with independent samples of a variable for which the inversions are analytically available, for example the standard normal variable.
== Software implementations ==
There are software implementations available for applying the inverse sampling method by using numerical approximations of the inverse in the case that it is not available in closed form. For example, an approximation of the inverse can be computed if the user provides some information about the distributions such as the PDF or the CDF.
C library UNU.RAN
R library Runuran
Python subpackage sampling in scipy.stats
== See also ==
Probability integral transform
Copula, defined by means of probability integral transform.
Quantile function, for the explicit construction of inverse CDFs.
Inverse distribution function for a precise mathematical definition for distributions with discrete components.
Rejection sampling is another common technique to generate random variates that does not rely on inversion of the CDF.
== References ==
In mathematics, Varadhan's lemma is a result from the large deviations theory named after S. R. Srinivasa Varadhan. The result gives information on the asymptotic distribution of a statistic φ(Zε) of a family of random variables Zε as ε becomes small in terms of a rate function for the variables.
== Statement of the lemma ==
Let X be a regular topological space; let (Zε)ε>0 be a family of random variables taking values in X; let με be the law (probability measure) of Zε. Suppose that (με)ε>0 satisfies the large deviation principle with good rate function I : X → [0, +∞]. Let ϕ : X → R be any continuous function. Suppose that at least one of the following two conditions holds true: either the tail condition
{\displaystyle \lim _{M\to \infty }\limsup _{\varepsilon \to 0}{\big (}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\phi (Z_{\varepsilon })/\varepsilon {\big )}\,\mathbf {1} {\big (}\phi (Z_{\varepsilon })\geq M{\big )}{\big ]}{\big )}=-\infty ,}
where 1(E) denotes the indicator function of the event E; or, for some γ > 1, the moment condition
{\displaystyle \limsup _{\varepsilon \to 0}{\big (}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\gamma \phi (Z_{\varepsilon })/\varepsilon {\big )}{\big ]}{\big )}<\infty .}
Then
{\displaystyle \lim _{\varepsilon \to 0}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\phi (Z_{\varepsilon })/\varepsilon {\big )}{\big ]}=\sup _{x\in X}{\big (}\phi (x)-I(x){\big )}.}
== See also ==
Laplace principle (large deviations theory)
== References ==
Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. pp. xvi+396. ISBN 0-387-98406-2. MR 1619036. (See theorem 4.3.1)
In probability theory, a product-form solution is a particularly efficient form of solution for determining some metric of a system with distinct sub-components, where the metric for the collection of components can be written as a product of the metric across the different components. Using capital Pi notation a product-form solution has algebraic form
{\displaystyle {\text{P}}(x_{1},x_{2},x_{3},\ldots ,x_{n})=B\prod _{i=1}^{n}{\text{P}}(x_{i})}
where B is some constant. Solutions of this form are of interest as they are computationally inexpensive to evaluate for large values of n. Such solutions in queueing networks are important for finding performance metrics in models of multiprogrammed and time-shared computer systems.
== Equilibrium distributions ==
The first product-form solutions were found for equilibrium distributions of Markov chains. Trivially, models composed of two or more independent sub-components exhibit a product-form solution by the definition of independence. Initially the term was used in queueing networks where the sub-components would be individual queues. For example, Jackson's theorem gives the joint equilibrium distribution of an open queueing network as the product of the equilibrium distributions of the individual queues. After numerous extensions, chiefly the BCMP network, it was thought that local balance was a requirement for a product-form solution.
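Jackson's theorem can be sketched numerically. In the sketch below (the utilization values are illustrative, and service-rate details are abstracted away), each queue contributes a geometric M/M/1 marginal, and the joint equilibrium probability is simply their product, which is what makes the solution cheap to evaluate even for many queues.

```python
def mm1_marginal(rho, n):
    """Equilibrium probability of n jobs in an M/M/1 queue with utilization rho < 1."""
    return (1.0 - rho) * rho ** n

def jackson_joint(rhos, state):
    """Jackson's theorem: the joint probability is the product of M/M/1 marginals."""
    p = 1.0
    for rho, n in zip(rhos, state):
        p *= mm1_marginal(rho, n)
    return p

rhos = [0.5, 0.8, 0.3]              # per-queue utilizations (illustrative values)
p = jackson_joint(rhos, (2, 0, 1))  # P(2 jobs at queue 1, 0 at queue 2, 1 at queue 3)

# Each marginal sums to 1, so the joint sums to 1 over the full state space;
# check numerically over a truncated state space:
total = sum(jackson_joint(rhos, (i, j, k))
            for i in range(60) for j in range(60) for k in range(60))
```

Evaluating any single state costs O(number of queues), with no need to enumerate the exponentially large joint state space.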
Gelenbe's G-network model was the first to show that this is not the case. Motivated by the need to model biological neurons which have a point-process like spiking behaviour, he introduced the precursor of G-Networks, calling it the random neural network. By introducing "negative customers" which can destroy or eliminate other customers, he generalised the family of product form networks. Then this was further extended in several steps, first by Gelenbe's "triggers" which are customers which have the power of moving other customers from some queue to another. Another new form of customer that also led to product form was Gelenbe's "batch removal". This was further extended by Erol Gelenbe and Jean-Michel Fourneau with customer types called "resets" which can model the repair of failures: when a queue hits the empty state, representing (for instance) a failure, the queue length can jump back or be "reset" to its steady-state distribution by an arriving reset customer, representing a repair. All these previous types of customers in G-Networks can exist in the same network, including with multiple classes, and they all together still result in the product form solution, taking us far beyond the reversible networks that had been considered before.
Product-form solutions are sometimes described as "stations are independent in equilibrium". Product form solutions also exist in networks of bulk queues.
J.M. Harrison and R.J. Williams note that "virtually all of the models that have been successfully analyzed in classical queueing network theory are models having a so-called product-form stationary distribution". More recently, product-form solutions have been published for Markov process algebras (e.g. RCAT in PEPA) and stochastic Petri nets. Martin Feinberg's deficiency zero theorem gives a sufficient condition for chemical reaction networks to exhibit a product-form stationary distribution.
The work by Gelenbe also shows that product form G-Networks can be used to model spiking random neural networks, and furthermore that such networks can be used to approximate bounded and continuous real-valued functions.
== Sojourn time distributions ==
The term product form has also been used to refer to the sojourn time distribution in a cyclic queueing system, where the time spent by jobs at M nodes is given as the product of time spent at each node. In 1957 Reich showed the result for two M/M/1 queues in tandem, later extending this to n M/M/1 queues in tandem and it has been shown to apply to overtake–free paths in Jackson networks. Walrand and Varaiya suggest that non-overtaking (where customers cannot overtake other customers by taking a different route through the network) may be a necessary condition for the result to hold. Mitrani offers exact solutions to some simple networks with overtaking, showing that none of these exhibit product-form sojourn time distributions.
For closed networks, Chow showed a result to hold for two service nodes, which was later generalised to a cycle of queues and to overtake–free paths in Gordon–Newell networks.
== Extensions ==
Approximate product-form solutions are computed assuming independent marginal distributions, which can give a good approximation to the stationary distribution under some conditions.
Semi-product-form solutions are solutions where a distribution can be written as a product where terms have a limited functional dependency on the global state space, which can be approximated.
Quasi-product-form solutions are either
solutions which are not the product of marginal densities, but the marginal densities describe the distribution in a product-type manner or
approximate form for transient probability distributions which allows transient moments to be approximated.
== References ==
In probability theory, it is possible to approximate the moments of a function f of a random variable X using Taylor expansions, provided that f is sufficiently differentiable and that the moments of X are finite.
A simulation-based alternative to this approximation is the application of Monte Carlo simulations.
== First moment ==
Given μ_X and σ_X^2, the mean and the variance of X, respectively, a Taylor expansion of the expected value of f(X) can be found via
{\displaystyle {\begin{aligned}\operatorname {E} \left[f(X)\right]&{}=\operatorname {E} \left[f\left(\mu _{X}+\left(X-\mu _{X}\right)\right)\right]\\&{}\approx \operatorname {E} \left[f(\mu _{X})+f'(\mu _{X})\left(X-\mu _{X}\right)+{\frac {1}{2}}f''(\mu _{X})\left(X-\mu _{X}\right)^{2}\right]\\&{}=f(\mu _{X})+f'(\mu _{X})\operatorname {E} \left[X-\mu _{X}\right]+{\frac {1}{2}}f''(\mu _{X})\operatorname {E} \left[\left(X-\mu _{X}\right)^{2}\right].\end{aligned}}}
Since E[X − μ_X] = 0, the second term vanishes. Also, E[(X − μ_X)^2] is σ_X^2. Therefore,
{\displaystyle \operatorname {E} \left[f(X)\right]\approx f(\mu _{X})+{\frac {f''(\mu _{X})}{2}}\sigma _{X}^{2}}.
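As a quick numerical check of this approximation (the values of μ and σ are arbitrary), take f = exp and X normal, for which the exact expectation is the lognormal mean:

```python
import math

mu, sigma = 0.3, 0.2
# f = exp, so f(mu) = f''(mu) = e^mu; second-order approximation of E[e^X]:
approx = math.exp(mu) * (1.0 + 0.5 * sigma**2)
# For X ~ N(mu, sigma^2) the exact value is known in closed form:
exact = math.exp(mu + sigma**2 / 2.0)
```

The truncation error here is of order σ^4 (about 3 × 10^-4 for these values), and the approximation undershoots because the omitted higher-order terms of the exponential are all positive.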
It is possible to generalize this to functions of more than one variable using multivariate Taylor expansions. For example,
{\displaystyle \operatorname {E} \left[{\frac {X}{Y}}\right]\approx {\frac {\operatorname {E} \left[X\right]}{\operatorname {E} \left[Y\right]}}-{\frac {\operatorname {cov} \left[X,Y\right]}{\operatorname {E} \left[Y\right]^{2}}}+{\frac {\operatorname {E} \left[X\right]}{\operatorname {E} \left[Y\right]^{3}}}\operatorname {var} \left[Y\right]}
== Second moment ==
Similarly,
{\displaystyle \operatorname {var} \left[f(X)\right]\approx \left(f'(\operatorname {E} \left[X\right])\right)^{2}\operatorname {var} \left[X\right]=\left(f'(\mu _{X})\right)^{2}\sigma _{X}^{2}-{\frac {1}{4}}\left(f''(\mu _{X})\right)^{2}\sigma _{X}^{4}}
The above is obtained using a second-order approximation, following the method used in estimating the first moment. It will be a poor approximation in cases where f(X) is highly non-linear. This is a special case of the delta method.
Indeed, we take {\displaystyle \operatorname {E} \left[f(X)\right]\approx f(\mu _{X})+{\frac {f''(\mu _{X})}{2}}\sigma _{X}^{2}}.
With f(X) = g(X)^2, we get E[Y^2], where Y = g(X). The variance is then computed using the formula var[Y] = E[Y^2] − μ_Y^2.
An example is,
{\displaystyle \operatorname {var} \left[{\frac {X}{Y}}\right]\approx {\frac {\operatorname {var} \left[X\right]}{\operatorname {E} \left[Y\right]^{2}}}-{\frac {2\operatorname {E} \left[X\right]}{\operatorname {E} \left[Y\right]^{3}}}\operatorname {cov} \left[X,Y\right]+{\frac {\operatorname {E} \left[X\right]^{2}}{\operatorname {E} \left[Y\right]^{4}}}\operatorname {var} \left[Y\right].}
The second order approximation, when X follows a normal distribution, is:
{\displaystyle \operatorname {var} \left[f(X)\right]\approx \left(f'(\operatorname {E} \left[X\right])\right)^{2}\operatorname {var} \left[X\right]+{\frac {\left(f''(\operatorname {E} \left[X\right])\right)^{2}}{2}}\left(\operatorname {var} \left[X\right]\right)^{2}=\left(f'(\mu _{X})\right)^{2}\sigma _{X}^{2}+{\frac {1}{2}}\left(f''(\mu _{X})\right)^{2}\sigma _{X}^{4}+\left(f'(\mu _{X})\right)\left(f'''(\mu _{X})\right)\sigma _{X}^{4}}
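For f(x) = x^2 and Gaussian X, the normal-case formula above happens to be exact, because all derivatives of f beyond the second vanish. A quick check (the values of μ and σ are illustrative):

```python
import random

mu, sigma = 2.0, 0.5
# f(x) = x^2: f'(mu) = 2*mu, f''(mu) = 2, f'''(mu) = 0
second_order = (2 * mu)**2 * sigma**2 + 0.5 * 2**2 * sigma**4
# Known variance of X^2 when X ~ N(mu, sigma^2):
exact = 4 * mu**2 * sigma**2 + 2 * sigma**4

# Monte Carlo check of the same quantity:
rng = random.Random(7)
xs = [(mu + sigma * rng.gauss(0, 1))**2 for _ in range(100_000)]
m = sum(xs) / len(xs)
mc = sum((v - m)**2 for v in xs) / len(xs)
```

The plain first-order delta method would give only 4μ²σ² and miss the 2σ⁴ term, which matters when σ is not small relative to μ.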
== First product moment ==
To find a second-order approximation for the covariance of functions of two random variables (with the same function applied to both), one can proceed as follows. First, note that
cov[f(X), f(Y)] = E[f(X) f(Y)] − E[f(X)] E[f(Y)]. Since a second-order expansion for E[f(X)] has already been derived above, it only remains to find E[f(X) f(Y)]. Treating f(X) f(Y) as a two-variable function, the second-order Taylor expansion is as follows:
{\displaystyle {\begin{aligned}f(X)f(Y)&{}\approx f(\mu _{X})f(\mu _{Y})+(X-\mu _{X})f'(\mu _{X})f(\mu _{Y})+(Y-\mu _{Y})f(\mu _{X})f'(\mu _{Y})+{\frac {1}{2}}\left[(X-\mu _{X})^{2}f''(\mu _{X})f(\mu _{Y})+2(X-\mu _{X})(Y-\mu _{Y})f'(\mu _{X})f'(\mu _{Y})+(Y-\mu _{Y})^{2}f(\mu _{X})f''(\mu _{Y})\right]\end{aligned}}}
Taking expectation of the above and simplifying, making use of the identities E(X^2) = var(X) + [E(X)]^2 and E(XY) = cov(X, Y) + E(X) E(Y), leads to
{\displaystyle \operatorname {E} \left[f(X)f(Y)\right]\approx f(\mu _{X})f(\mu _{Y})+f'(\mu _{X})f'(\mu _{Y})\operatorname {cov} (X,Y)+{\frac {1}{2}}f''(\mu _{X})f(\mu _{Y})\operatorname {var} (X)+{\frac {1}{2}}f(\mu _{X})f''(\mu _{Y})\operatorname {var} (Y)}
. Hence,
{\displaystyle {\begin{aligned}\operatorname {cov} \left[f(X),f(Y)\right]&{}\approx f(\mu _{X})f(\mu _{Y})+f'(\mu _{X})f'(\mu _{Y})\operatorname {cov} (X,Y)+{\frac {1}{2}}f''(\mu _{X})f(\mu _{Y})\operatorname {var} (X)+{\frac {1}{2}}f(\mu _{X})f''(\mu _{Y})\operatorname {var} (Y)-\left[f(\mu _{X})+{\frac {1}{2}}f''(\mu _{X})\operatorname {var} (X)\right]\left[f(\mu _{Y})+{\frac {1}{2}}f''(\mu _{Y})\operatorname {var} (Y)\right]\\&{}=f'(\mu _{X})f'(\mu _{Y})\operatorname {cov} (X,Y)-{\frac {1}{4}}f''(\mu _{X})f''(\mu _{Y})\operatorname {var} (X)\operatorname {var} (Y)\end{aligned}}}
== Random vectors ==
If X is a random vector, the approximations for the mean and variance of f(X) are given by
{\displaystyle {\begin{aligned}\operatorname {E} (f(X))&=f(\mu _{X})+{\frac {1}{2}}\operatorname {trace} (H_{f}(\mu _{X})\Sigma _{X})\\\operatorname {var} (f(X))&=\nabla f(\mu _{X})^{t}\Sigma _{X}\nabla f(\mu _{X})+{\frac {1}{2}}\operatorname {trace} \left(H_{f}(\mu _{X})\Sigma _{X}H_{f}(\mu _{X})\Sigma _{X}\right).\end{aligned}}}
Here ∇f and H_f denote the gradient and the Hessian matrix respectively, and Σ_X is the covariance matrix of X.
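The mean formula can be exercised on a small example (the means and covariance entries are made up). For the bilinear function f(x, y) = xy, the Hessian is constant and the second-order approximation reproduces the exact identity E[XY] = μ_X μ_Y + cov(X, Y):

```python
def taylor_mean(f_mu, H, Sigma):
    """E[f(X)] ≈ f(mu) + 0.5 * trace(H_f(mu) · Sigma), without numpy."""
    n = len(Sigma)
    tr = sum(sum(H[i][k] * Sigma[k][i] for k in range(n)) for i in range(n))
    return f_mu + 0.5 * tr

mu = (1.5, -2.0)
Sigma = [[0.4, 0.1], [0.1, 0.9]]          # illustrative covariance matrix
# f(x, y) = x*y has constant Hessian [[0, 1], [1, 0]]
approx = taylor_mean(mu[0] * mu[1], [[0.0, 1.0], [1.0, 0.0]], Sigma)
exact = mu[0] * mu[1] + Sigma[0][1]       # E[XY] = mu_x * mu_y + cov(X, Y)
```

Because all third and higher derivatives of xy vanish, the approximation carries no truncation error for this particular f; for general f it is accurate to second order.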
== See also ==
Propagation of uncertainty
WKB approximation
Delta method
== Notes ==
== Further reading ==
Wolter, Kirk M. (1985). "Taylor Series Methods". Introduction to Variance Estimation. New York: Springer. pp. 221–247. ISBN 0-387-96119-4.
In the domain of physics and probability, a Markov random field (MRF), Markov network or undirected graphical model is a set of random variables having a Markov property described by an undirected graph. In other words, a random field is said to be a Markov random field if it satisfies Markov properties. The concept originates from the Sherrington–Kirkpatrick model.
A Markov network or MRF is similar to a Bayesian network in its representation of dependencies; the differences being that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies ); on the other hand, it can't represent certain dependencies that a Bayesian network can (such as induced dependencies ). The underlying graph of a Markov random field may be finite or infinite.
When the joint probability density of the random variables is strictly positive, it is also referred to as a Gibbs random field, because, according to the Hammersley–Clifford theorem, it can then be represented by a Gibbs measure for an appropriate (locally defined) energy function. The prototypical Markov random field is the Ising model; indeed, the Markov random field was introduced as the general setting for the Ising model. In the domain of artificial intelligence, a Markov random field is used to model various low- to mid-level tasks in image processing and computer vision.
== Definition ==
Given an undirected graph G = (V, E), a set of random variables X = (X_v)_{v ∈ V} indexed by V form a Markov random field with respect to G if they satisfy the local Markov properties:
Pairwise Markov property: Any two non-adjacent variables are conditionally independent given all other variables:
{\displaystyle X_{u}\perp \!\!\!\perp X_{v}\mid X_{V\smallsetminus \{u,v\}}}
Local Markov property: A variable is conditionally independent of all other variables given its neighbors:
{\displaystyle X_{v}\perp \!\!\!\perp X_{V\smallsetminus \operatorname {N} [v]}\mid X_{\operatorname {N} (v)}}
where N(v) is the set of neighbors of v, and N[v] = {v} ∪ N(v) is the closed neighbourhood of v.
Global Markov property: Any two subsets of variables are conditionally independent given a separating subset:
{\displaystyle X_{A}\perp \!\!\!\perp X_{B}\mid X_{S}}
where every path from a node in A to a node in B passes through S.
The Global Markov property is stronger than the Local Markov property, which in turn is stronger than the Pairwise one. However, the above three Markov properties are equivalent for positive distributions (those that assign only nonzero probabilities to the associated variables).
The relation between the three Markov properties is particularly clear in the following formulation:
Pairwise: For any i, j ∈ V not equal or adjacent, {\displaystyle X_{i}\perp \!\!\!\perp X_{j}|X_{V\smallsetminus \{i,j\}}}.
Local: For any i ∈ V and J ⊂ V not containing or adjacent to i, {\displaystyle X_{i}\perp \!\!\!\perp X_{J}|X_{V\smallsetminus (\{i\}\cup J)}}.
Global: For any I, J ⊂ V not intersecting or adjacent, {\displaystyle X_{I}\perp \!\!\!\perp X_{J}|X_{V\smallsetminus (I\cup J)}}.
== Clique factorization ==
As the Markov property of an arbitrary probability distribution can be difficult to establish, a commonly used class of Markov random fields are those that can be factorized according to the cliques of the graph.
Given a set of random variables X = (X_v)_{v ∈ V}, let P(X = x) be the probability of a particular field configuration x in X; that is, P(X = x) is the probability of finding that the random variables X take on the particular value x. Because X is a set, the probability of x should be understood to be taken with respect to a joint distribution of the X_v.
If this joint density can be factorized over the cliques of G as
{\displaystyle P(X=x)=\prod _{C\in \operatorname {cl} (G)}\varphi _{C}(x_{C})}
then X forms a Markov random field with respect to G. Here, cl(G) is the set of cliques of G. The definition is equivalent if only maximal cliques are used. The functions φ_C are sometimes referred to as factor potentials or clique potentials. Note, however, that conflicting terminology is in use: the word potential is often applied to the logarithm of φ_C. This is because, in statistical mechanics, log(φ_C) has a direct interpretation as the potential energy of a configuration x_C.
Some MRFs do not factorize: a simple example can be constructed on a cycle of 4 nodes with some infinite energies, i.e. configurations of zero probabilities, even if one, more appropriately, allows the infinite energies to act on the complete graph on V.
MRFs factorize if at least one of the following conditions is fulfilled:
the density is strictly positive (by the Hammersley–Clifford theorem)
the graph is chordal (by equivalence to a Bayesian network)
When such a factorization does exist, it is possible to construct a factor graph for the network.
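To make the factorization concrete, here is a small sketch (the potential values are made up) for a three-node chain with cliques {0, 1} and {1, 2}: the joint distribution is the normalized product of the clique potentials, and the pairwise Markov property X_0 ⊥⊥ X_2 | X_1 then holds exactly.

```python
from itertools import product

# Hypothetical clique potentials on the binary chain 0 - 1 - 2:
phi01 = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
phi12 = {(0, 0): 1.0, (0, 1): 4.0, (1, 0): 2.0, (1, 1): 1.0}

weight = {x: phi01[(x[0], x[1])] * phi12[(x[1], x[2])]
          for x in product((0, 1), repeat=3)}
Z = sum(weight.values())                      # normalizing constant (partition function)
P = {x: w / Z for x, w in weight.items()}

def cond_indep(x1):
    """Check X0 ⟂ X2 | X1 = x1 numerically."""
    p1 = sum(p for x, p in P.items() if x[1] == x1)
    ok = True
    for a, c in product((0, 1), repeat=2):
        joint = P[(a, x1, c)] / p1
        marg0 = sum(P[(a, x1, cc)] for cc in (0, 1)) / p1
        marg2 = sum(P[(aa, x1, c)] for aa in (0, 1)) / p1
        ok = ok and abs(joint - marg0 * marg2) < 1e-12
    return ok
```

Any choice of nonnegative potentials on these two cliques gives the same conditional independence, which is the content of the factorization criterion.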
== Exponential family ==
Any positive Markov random field can be written as an exponential family in canonical form with feature functions f_k such that the full-joint distribution can be written as
{\displaystyle P(X=x)={\frac {1}{Z}}\exp \left(\sum _{k}w_{k}^{\top }f_{k}(x_{\{k\}})\right)}
where the notation {\displaystyle w_{k}^{\top }f_{k}(x_{\{k\}})=\sum _{i=1}^{N_{k}}w_{k,i}\cdot f_{k,i}(x_{\{k\}})} is simply a dot product over field configurations, and Z is the partition function:
{\displaystyle Z=\sum _{x\in {\mathcal {X}}}\exp \left(\sum _{k}w_{k}^{\top }f_{k}(x_{\{k\}})\right).}
Here, 𝒳 denotes the set of all possible assignments of values to all the network's random variables. Usually, the feature functions f_{k,i} are defined such that they are indicators of the clique's configuration, i.e. f_{k,i}(x_{k}) = 1 if x_{k} corresponds to the i-th possible configuration of the k-th clique and 0 otherwise. This model is equivalent to the clique factorization model given above, if N_k = |dom(C_k)| is the cardinality of the clique, and the weight of a feature f_{k,i} corresponds to the logarithm of the corresponding clique factor, i.e. w_{k,i} = log φ(c_{k,i}), where c_{k,i} is the i-th possible configuration of the k-th clique, i.e. the i-th value in the domain of the clique C_k.
The probability P is often called the Gibbs measure. This expression of a Markov field as a logistic model is only possible if all clique factors are non-zero, i.e. if none of the elements of 𝒳 are assigned a probability of 0. This allows techniques from matrix algebra to be applied, e.g. the identity that the trace of the logarithm of a matrix equals the logarithm of its determinant, with the matrix representation of a graph arising from the graph's incidence matrix.
The importance of the partition function Z is that many concepts from statistical mechanics, such as entropy, directly generalize to the case of Markov networks, and an intuitive understanding can thereby be gained. In addition, the partition function allows variational methods to be applied to the solution of the problem: one can attach a driving force to one or more of the random variables, and explore the reaction of the network in response to this perturbation. Thus, for example, one may add a driving term Jv, for each vertex v of the graph, to the partition function to get:
{\displaystyle Z[J]=\sum _{x\in {\mathcal {X}}}\exp \left(\sum _{k}w_{k}^{\top }f_{k}(x_{\{k\}})+\sum _{v}J_{v}x_{v}\right)}
Formally differentiating with respect to Jv gives the expectation value of the random variable Xv associated with the vertex v:
{\displaystyle E[X_{v}]={\frac {1}{Z}}\left.{\frac {\partial Z[J]}{\partial J_{v}}}\right|_{J_{v}=0}.}
Correlation functions are computed likewise; the two-point correlation is:
{\displaystyle C[X_{u},X_{v}]={\frac {1}{Z}}\left.{\frac {\partial ^{2}Z[J]}{\partial J_{u}\,\partial J_{v}}}\right|_{J_{u}=0,J_{v}=0}.}
Unfortunately, though the likelihood of a logistic Markov network is convex, evaluating the likelihood or gradient of the likelihood of a model requires inference in the model, which is generally computationally infeasible (see 'Inference' below).
== Examples ==
=== Gaussian ===
A multivariate normal distribution forms a Markov random field with respect to a graph G = (V, E) if the missing edges correspond to zeros on the precision matrix (the inverse covariance matrix):
{\displaystyle X=(X_{v})_{v\in V}\sim {\mathcal {N}}({\boldsymbol {\mu }},\Sigma )}
such that
{\displaystyle (\Sigma ^{-1})_{uv}=0\quad {\text{iff}}\quad \{u,v\}\notin E.}
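This zero pattern can be verified exactly on a toy Gaussian chain (the chain and its unit innovation variances are an illustrative choice): with X2 = X1 + e2 and X3 = X2 + e3, nodes 1 and 3 are not adjacent in the chain graph, so the (1, 3) entry of the precision matrix must vanish.

```python
from fractions import Fraction as F

# Covariance of the chain X1 -> X2 -> X3 with unit-variance innovations:
Sigma = [[F(1), F(1), F(1)],
         [F(1), F(2), F(2)],
         [F(1), F(2), F(3)]]

def inv3(M):
    """Exact 3x3 matrix inverse via the adjugate (cofactor) formula."""
    cof = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            rows = [r for r in range(3) if r != i]
            cols = [c for c in range(3) if c != j]
            minor = (M[rows[0]][cols[0]] * M[rows[1]][cols[1]]
                     - M[rows[0]][cols[1]] * M[rows[1]][cols[0]])
            cof[i][j] = (-1) ** (i + j) * minor
    det = sum(M[0][j] * cof[0][j] for j in range(3))
    # Inverse is the transposed cofactor matrix divided by the determinant:
    return [[cof[j][i] / det for j in range(3)] for i in range(3)]

Q = inv3(Sigma)   # precision matrix: tridiagonal, with Q[0][2] exactly zero
```

Using exact rational arithmetic (`fractions`) makes the zero entry exact rather than a floating-point near-zero.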
== Inference ==
As in a Bayesian network, one may calculate the conditional distribution of a set of nodes V′ = {v_1, …, v_i} given values to another set of nodes W′ = {w_1, …, w_j} in the Markov random field by summing over all possible assignments to u ∉ V′, W′; this is called exact inference. However, exact inference is a #P-complete problem, and thus computationally intractable in the general case. Approximation techniques such as Markov chain Monte Carlo and loopy belief propagation are often more feasible in practice. Some particular subclasses of MRFs, such as trees (see Chow–Liu tree), have polynomial-time inference algorithms; discovering such subclasses is an active research topic. There are also subclasses of MRFs that permit efficient MAP, or most likely assignment, inference; examples of these include associative networks. Another interesting sub-class is the one of decomposable models (when the graph is chordal): having a closed-form for the MLE, it is possible to discover a consistent structure for hundreds of variables.
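Exact inference by brute-force summation is easy to write down for a tiny network (the pairwise potentials below are made up): fix the observed nodes and sum the unnormalized product of potentials over every assignment of the unobserved ones.

```python
from itertools import product

# Hypothetical pairwise potentials on the binary chain 0 - 1 - 2:
phi = {(0, 1): {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0},
       (1, 2): {(0, 0): 1.0, (0, 1): 4.0, (1, 0): 2.0, (1, 1): 1.0}}

def unnorm(x):
    """Unnormalized probability: product of clique potentials."""
    return phi[(0, 1)][(x[0], x[1])] * phi[(1, 2)][(x[1], x[2])]

def conditional(v, observed):
    """P(X_v | observed) by summing over all assignments of the unobserved nodes."""
    scores = {}
    for x in product((0, 1), repeat=3):
        if all(x[k] == val for k, val in observed.items()):
            scores[x[v]] = scores.get(x[v], 0.0) + unnorm(x)
    total = sum(scores.values())
    return {val: s / total for val, s in scores.items()}

dist = conditional(0, {2: 1})   # P(X0 | X2 = 1), summing over X1
```

The normalizing constant cancels in the conditional, so the partition function never needs to be computed; the cost, however, is exponential in the number of unobserved nodes, which is exactly why the general problem is intractable.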
== Conditional random fields ==
One notable variant of a Markov random field is a conditional random field, in which each random variable may also be conditioned upon a set of global observations o. In this model, each function φ_k is a mapping from all assignments to both the clique k and the observations o to the nonnegative real numbers. This form of the Markov network may be more appropriate for producing discriminative classifiers, which do not model the distribution over the observations. CRFs were proposed by John D. Lafferty, Andrew McCallum and Fernando C.N. Pereira in 2001.
== Varied applications ==
Markov random fields find application in a variety of fields, ranging from computer graphics to computer vision, machine learning, computational biology, and information retrieval. MRFs are used in image processing to generate textures, as they provide flexible and stochastic image models. In image modelling, the task is to find a suitable intensity distribution of a given image, where suitability depends on the kind of task; MRFs are flexible enough to be used for image and texture synthesis, image compression and restoration, image segmentation, 3D image inference from 2D images, image registration, super-resolution, stereo matching and information retrieval. They can be used to solve various computer vision problems which can be posed as energy minimization problems, or problems where different regions have to be distinguished using a set of discriminating features, within a Markov random field framework, to predict the category of the region. Markov random fields were a generalization of the Ising model and have, since then, been used widely in combinatorial optimization and networks.
== See also ==
== References ==
In mathematics — specifically, in large deviations theory — the contraction principle is a theorem that states how a large deviation principle on one space "pushes forward" (via the pushforward of a probability measure) to a large deviation principle on another space via a continuous function.
== Statement ==
Let X and Y be Hausdorff topological spaces and let (με)ε>0 be a family of probability measures on X that satisfies the large deviation principle with rate function I : X → [0, +∞]. Let T : X → Y be a continuous function, and let νε = T∗(με) be the push-forward measure of με by T, i.e., for each measurable set/event E ⊆ Y, νε(E) = με(T−1(E)). Let
{\displaystyle J(y):=\inf\{I(x)\mid x\in X{\text{ and }}T(x)=y\},}
with the convention that the infimum of I over the empty set ∅ is +∞. Then:
J : Y → [0, +∞] is a rate function on Y,
J is a good rate function on Y if I is a good rate function on X, and
(νε)ε>0 satisfies the large deviation principle on Y with rate function J.
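As a toy illustration (the Gaussian rate function and the map T are chosen for simplicity), the contraction principle reduces to a finite minimization whenever the fiber T⁻¹(y) can be enumerated:

```python
import math

def contract(I, fiber):
    """Contraction principle: J(y) = inf of I over the fiber T^{-1}(y)."""
    return min((I(x) for x in fiber), default=math.inf)

def I(x):
    # Rate function for sample means of i.i.d. standard Gaussians (illustrative choice).
    return x * x / 2.0

def J(y):
    # T(x) = x^2: the fiber over y >= 0 is {+sqrt(y), -sqrt(y)}, empty for y < 0.
    fiber = (math.sqrt(y), -math.sqrt(y)) if y >= 0 else ()
    return contract(I, fiber)
```

Here J(y) = y/2 for y ≥ 0 and J(y) = +∞ for y < 0, reflecting the convention that the infimum over an empty fiber is +∞.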
== References ==
Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. pp. xvi+396. ISBN 0-387-98406-2. MR 1619036. (See chapter 4.2.1)
den Hollander, Frank (2000). Large deviations. Fields Institute Monographs 14. Providence, RI: American Mathematical Society. pp. x+143. ISBN 0-8218-1989-5. MR 1739680.
In the mathematical theory of random dynamical systems, an absorbing set is a subset of the phase space that exhibits a capturing property. It acts like a gravitational center, with the property that all trajectories of the system eventually enter and remain within that set.
Absorbing sets ultimately contain the transformed images of any initially bounded set as the system evolves over time. As with many concepts related to random dynamical systems, it is defined in the pullback sense, which means they are understood through their long-term behavior.
Absorbing sets are a key concept in the study of the long-term behavior of dynamical systems, particularly in the context of dissipative systems, as they provide a bound on the possible future states of the system. The existence and properties of absorbing sets are fundamental to establishing the existence of global attractors and understanding the asymptotic behavior of solutions.
== Definition ==
Consider a random dynamical system φ on a complete separable metric space (X, d), where the noise is chosen from a probability space (Ω, Σ, P) with base flow θ : R × Ω → Ω. A random compact set K : Ω → 2^X is said to be absorbing if, for all d-bounded deterministic sets B ⊆ X, there exists a (finite) random time τ_B : Ω → [0, +∞) such that
{\displaystyle t\geq \tau _{B}(\omega )\implies \varphi (t,\theta _{-t}\omega )B\subseteq K(\omega ).}
This is a definition in the pullback sense, as indicated by the use of the negative time shift θ−t.
== See also ==
Glossary of areas of mathematics
Lists of mathematics topics
Mathematics Subject Classification
Outline of mathematics
== References ==
Robinson, James C.; Tearne, Oliver M. (2005). "Boundaries of attractors of omega limit sets". Stoch. Dyn. 5 (1): 97–109. doi:10.1142/S0219493705001304. ISSN 0219-4937. MR 2118757. (See footnote (e) on p. 104)
In mathematics, the second moment method is a technique used in probability theory and analysis to show that a random variable has positive probability of being positive. More generally, the "moment method" consists of bounding the probability that a random variable fluctuates far from its mean, by using its moments.
The method is often quantitative, in that one can often deduce a lower bound on the probability that the random variable is larger than some constant times its expectation. The method involves comparing the second moment of random variables to the square of the first moment.
== First moment method ==
The first moment method is a simple application of Markov's inequality for integer-valued variables. For a non-negative, integer-valued random variable X, we may want to prove that X = 0 with high probability. To obtain an upper bound for Pr(X > 0), and thus a lower bound for Pr(X = 0), we first note that since X takes only integer values, Pr(X > 0) = Pr(X ≥ 1). Since X is non-negative we can now apply Markov's inequality to obtain Pr(X ≥ 1) ≤ E[X]. Combining these we have Pr(X > 0) ≤ E[X]; the first moment method is simply the use of this inequality.
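As a sanity check of the first moment bound Pr(X > 0) ≤ E[X], here is a minimal sketch using a Binomial(n, p) count, for which both sides have closed forms (the parameter values are purely illustrative):

```python
# Sketch: verify Pr(X > 0) <= E[X] exactly for X ~ Binomial(n, p),
# where Pr(X > 0) = 1 - (1-p)^n and E[X] = n*p.
def first_moment_bound(n, p):
    pr_positive = 1.0 - (1.0 - p) ** n   # Pr(X >= 1) = Pr(X > 0)
    expectation = n * p                  # E[X]
    return pr_positive, expectation

# Markov's inequality for a non-negative integer variable
for n in (10, 100, 1000):
    for p in (0.001, 0.01, 0.1):
        pr, ex = first_moment_bound(n, p)
        assert pr <= ex + 1e-12
```

When E[X] is small, the bound shows X = 0 with high probability, which is exactly how the first moment method is used.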
== Second moment method ==
In the other direction, E[X] being "large" does not directly imply that Pr(X = 0) is small. However, we can often use the second moment to derive such a conclusion, using the Cauchy–Schwarz inequality.
The method can also be used on distributional limits of random variables. Furthermore, the estimate of the previous theorem can be refined by means of the so-called Paley–Zygmund inequality. Suppose that Xn is a sequence of non-negative real-valued random variables which converge in law to a random variable X. If there are finite positive constants c1, c2 such that
{\displaystyle {\begin{aligned}\operatorname {E} \left[X_{n}^{2}\right]&\leq c_{1}\operatorname {E} [X_{n}]^{2}\\\operatorname {E} \left[X_{n}\right]&\geq c_{2}\end{aligned}}}
hold for every n, then it follows from the Paley–Zygmund inequality that for every n and θ in (0, 1)
{\displaystyle \Pr(X_{n}\geq c_{2}\theta )\geq {\frac {(1-\theta )^{2}}{c_{1}}}.}
Consequently, the same inequality is satisfied by X.
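The Paley–Zygmund inequality Pr(X ≥ θ E[X]) ≥ (1 − θ)² E[X]²/E[X²] can be checked exactly for a small discrete distribution; a minimal sketch with a hypothetical three-point pmf:

```python
# Sketch: exact check of the Paley-Zygmund inequality
#   Pr(X >= theta * E[X]) >= (1 - theta)^2 * E[X]^2 / E[X^2]
# for a small non-negative discrete random variable (hypothetical pmf).
pmf = {0.0: 0.3, 1.0: 0.5, 4.0: 0.2}   # value -> probability

mean = sum(v * p for v, p in pmf.items())          # E[X] = 1.3
second = sum(v * v * p for v, p in pmf.items())    # E[X^2] = 3.7

for theta in (0.1, 0.5, 0.9):
    lhs = sum(p for v, p in pmf.items() if v >= theta * mean)
    rhs = (1 - theta) ** 2 * mean ** 2 / second
    assert lhs >= rhs - 1e-12
```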
== Example application of method ==
=== Setup of problem ===
The Bernoulli bond percolation subgraph of a graph G at parameter p is a random subgraph obtained from G by deleting every edge of G with probability 1−p, independently. The infinite complete binary tree T is an infinite tree where one vertex (called the root) has two neighbors and every other vertex has three neighbors. The second moment method can be used to show that at every parameter p ∈ (1/2, 1] with positive probability the connected component of the root in the percolation subgraph of T is infinite.
=== Application of method ===
Let K be the percolation component of the root, and let Tn be the set of vertices of T that are at distance n from the root. Let Xn be the number of vertices in Tn ∩ K.
To prove that K is infinite with positive probability, it is enough to show that
{\displaystyle \Pr(X_{n}>0\ \ \forall n)>0}
. Since the events
{\displaystyle \{X_{n}>0\}}
form a decreasing sequence, by continuity of probability measures this is equivalent to showing that
{\displaystyle \inf _{n}\Pr(X_{n}>0)>0}
.
The Cauchy–Schwarz inequality gives
{\displaystyle \operatorname {E} [X_{n}]^{2}\leq \operatorname {E} [X_{n}^{2}]\,\operatorname {E} \left[(1_{X_{n}>0})^{2}\right]=\operatorname {E} [X_{n}^{2}]\,\Pr(X_{n}>0).}
Therefore, it is sufficient to show that
{\displaystyle \inf _{n}{\frac {\operatorname {E} \left[X_{n}\right]^{2}}{\operatorname {E} \left[X_{n}^{2}\right]}}>0\,,}
that is, that the second moment is bounded from above by a constant times the first moment squared (and both are nonzero). In many applications of the second moment method, one is not able to calculate the moments precisely, but can nevertheless establish this inequality.
In this particular application, these moments can be calculated. For every specific v in Tn,
{\displaystyle \Pr(v\in K)=p^{n}.}
Since
{\displaystyle |T_{n}|=2^{n}}
, it follows that
{\displaystyle \operatorname {E} [X_{n}]=2^{n}\,p^{n}}
which is the first moment. Now comes the second moment calculation.
{\displaystyle \operatorname {E} \!\left[X_{n}^{2}\right]=\operatorname {E} \!\left[\sum _{v\in T_{n}}\sum _{u\in T_{n}}1_{v\in K}\,1_{u\in K}\right]=\sum _{v\in T_{n}}\sum _{u\in T_{n}}\Pr(v,u\in K).}
For each pair v, u in Tn let w(v, u) denote the vertex in T that is farthest away from the root and lies on the simple path in T to each of the two vertices v and u, and let k(v, u) denote the distance from w to the root. In order for v, u to both be in K, it is necessary and sufficient for the three simple paths from w(v, u) to v, u and the root to be in K. Since the number of edges contained in the union of these three paths is 2n − k(v, u), we obtain
{\displaystyle \Pr(v,u\in K)=p^{2n-k(v,u)}.}
The number of pairs (v, u) such that k(v, u) = s is equal to
{\displaystyle 2^{s}\,2^{n-s}\,2^{n-s-1}=2^{2n-s-1}}
, for
{\displaystyle s=0,1,\dots ,n-1}
and equal to
{\displaystyle 2^{n}}
for
{\displaystyle s=n}
. Hence, for
{\displaystyle p>{\frac {1}{2}}}
,
{\displaystyle \operatorname {E} [X_{n}^{2}]=(2p)^{n}+\sum _{s=0}^{n-1}2^{2n-s-1}p^{2n-s}={\frac {(2p)^{n+1}-2(2p)^{n}+(2p)^{2n+1}}{4p-2}},}
so that
{\displaystyle {\frac {(\operatorname {E} [X_{n}])^{2}}{\operatorname {E} [X_{n}^{2}]}}={\frac {4p-2}{(2p)^{1-n}-2(2p)^{-n}+2p}}\to 2-{\frac {1}{p}}>0,}
which completes the proof.
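The closed-form moments derived above can be evaluated directly; a short sketch that computes E[Xn] and E[Xn²] from those formulas and watches the ratio E[Xn]²/E[Xn²] approach 2 − 1/p (the choice p = 0.75 is illustrative):

```python
# Sketch: evaluate the first and second moments from the closed forms
#   E[X_n]   = (2p)^n
#   E[X_n^2] = (2p)^n + sum_{s=0}^{n-1} 2^(2n-s-1) * p^(2n-s)
# and check that E[X_n]^2 / E[X_n^2] tends to 2 - 1/p for p > 1/2.
def moments(n, p):
    first = (2 * p) ** n
    second = (2 * p) ** n + sum(2 ** (2 * n - s - 1) * p ** (2 * n - s)
                                for s in range(n))
    return first, second

p = 0.75
ratios = [moments(n, p)[0] ** 2 / moments(n, p)[1] for n in (1, 5, 20, 50)]
limit = 2 - 1 / p   # the positive limit guaranteeing inf_n Pr(X_n > 0) > 0
assert abs(ratios[-1] - limit) < 1e-3
```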
=== Discussion ===
The choice of the random variables Xn was rather natural in this setup. In some more difficult applications of the method, some ingenuity might be required in order to choose the random variables Xn for which the argument can be carried through.
The Paley–Zygmund inequality is sometimes used instead of the Cauchy–Schwarz inequality and may occasionally give more refined results.
Under the (incorrect) assumption that the events v, u in K are always independent, one has
{\displaystyle \Pr(v,u\in K)=\Pr(v\in K)\,\Pr(u\in K)}
, and the second moment is equal to the first moment squared. The second moment method typically works in situations in which the corresponding events or random variables are “nearly independent".
In this application, the random variables Xn are given as sums
{\displaystyle X_{n}=\sum _{v\in T_{n}}1_{v\in K}.}
In other applications, the corresponding useful random variables are integrals
{\displaystyle X_{n}=\int f_{n}(t)\,d\mu (t),}
where the functions fn are random. In such a situation, one considers the product measure μ × μ and calculates
{\displaystyle {\begin{aligned}\operatorname {E} \left[X_{n}^{2}\right]&=\operatorname {E} \left[\iint f_{n}(x)\,f_{n}(y)\,d\mu (x)\,d\mu (y)\right]\\&=\operatorname {E} \left[\iint \operatorname {E} \left[f_{n}(x)\,f_{n}(y)\right]\,d\mu (x)\,d\mu (y)\right],\end{aligned}}}
where the last step is typically justified using Fubini's theorem.
== References ==
The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function
{\displaystyle f(x)=e^{-x^{2}}}
over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is
{\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }}.}
Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809, attributing its discovery to Laplace. The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently, for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator. This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics, to find its partition function.
Although no elementary function exists for the error function, as can be proven by the Risch algorithm, the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary indefinite integral for
{\displaystyle \int e^{-x^{2}}\,dx,}
but the definite integral
{\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx}
can be evaluated. The definite integral of an arbitrary Gaussian function is
{\displaystyle \int _{-\infty }^{\infty }e^{-a(x+b)^{2}}\,dx={\sqrt {\frac {\pi }{a}}}.}
== Computation ==
=== By polar coordinates ===
A standard way to compute the Gaussian integral, the idea of which goes back to Poisson, is to make use of the property that:
{\displaystyle \left(\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\right)^{2}=\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\int _{-\infty }^{\infty }e^{-y^{2}}\,dy=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-\left(x^{2}+y^{2}\right)}\,dx\,dy.}
Consider the function
{\displaystyle e^{-\left(x^{2}+y^{2}\right)}=e^{-r^{2}}}
on the plane
{\displaystyle \mathbb {R} ^{2}}
, and compute its integral two ways:
on the one hand, by double integration in the Cartesian coordinate system, its integral is a square:
{\displaystyle \left(\int e^{-x^{2}}\,dx\right)^{2};}
on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be
{\displaystyle \pi }
Comparing these two computations yields the integral, though one should take care about the improper integrals involved.
{\displaystyle {\begin{aligned}\iint _{\mathbb {R} ^{2}}e^{-\left(x^{2}+y^{2}\right)}dx\,dy&=\int _{0}^{2\pi }\int _{0}^{\infty }e^{-r^{2}}r\,dr\,d\theta \\[6pt]&=2\pi \int _{0}^{\infty }re^{-r^{2}}\,dr\\[6pt]&=2\pi \int _{-\infty }^{0}{\tfrac {1}{2}}e^{s}\,ds&&s=-r^{2}\\[6pt]&=\pi \int _{-\infty }^{0}e^{s}\,ds\\[6pt]&=\pi \,\left[e^{s}\right]_{-\infty }^{0}\\[6pt]&=\pi \,\left(e^{0}-e^{-\infty }\right)\\[6pt]&=\pi \,\left(1-0\right)\\[6pt]&=\pi ,\end{aligned}}}
where the factor of r is the Jacobian determinant, which appears because of the transform to polar coordinates (r dr dθ being the standard measure on the plane expressed in polar coordinates), and the substitution involves taking s = −r2, so ds = −2r dr.
Combining these yields
{\displaystyle \left(\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\right)^{2}=\pi ,}
so
{\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }}.}
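The result is easy to confirm numerically. A minimal sketch, assuming a plain midpoint rule is adequate once the tails are truncated (e^{−x²} is far below machine precision for |x| > 10):

```python
import math

# Sketch: midpoint-rule check that the integral of exp(-x^2) over the
# real line equals sqrt(pi); truncating at |x| = 10 loses essentially
# nothing because exp(-100) ~ 3.7e-44.
def gaussian_integral(a=-10.0, b=10.0, steps=200_000):
    h = (b - a) / steps
    return h * sum(math.exp(-(a + (k + 0.5) * h) ** 2) for k in range(steps))

assert abs(gaussian_integral() - math.sqrt(math.pi)) < 1e-6
```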
==== Complete proof ====
To justify the improper double integrals and equating the two expressions, we begin with an approximating function:
{\displaystyle I(a)=\int _{-a}^{a}e^{-x^{2}}dx.}
If the integral
{\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx}
were absolutely convergent we would have that its Cauchy principal value, that is, the limit
{\displaystyle \lim _{a\to \infty }I(a)}
would coincide with
{\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx.}
To see that this is the case, consider that
{\displaystyle \int _{-\infty }^{\infty }\left|e^{-x^{2}}\right|dx<\int _{-\infty }^{-1}-xe^{-x^{2}}\,dx+\int _{-1}^{1}e^{-x^{2}}\,dx+\int _{1}^{\infty }xe^{-x^{2}}\,dx<\infty .}
So we can compute
{\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx}
by just taking the limit
{\displaystyle \lim _{a\to \infty }I(a).}
Taking the square of
{\displaystyle I(a)}
yields
{\displaystyle {\begin{aligned}I(a)^{2}&=\left(\int _{-a}^{a}e^{-x^{2}}\,dx\right)\left(\int _{-a}^{a}e^{-y^{2}}\,dy\right)\\[6pt]&=\int _{-a}^{a}\left(\int _{-a}^{a}e^{-y^{2}}\,dy\right)\,e^{-x^{2}}\,dx\\[6pt]&=\int _{-a}^{a}\int _{-a}^{a}e^{-\left(x^{2}+y^{2}\right)}\,dy\,dx.\end{aligned}}}
Using Fubini's theorem, the above double integral can be seen as an area integral
{\displaystyle \iint _{[-a,a]\times [-a,a]}e^{-\left(x^{2}+y^{2}\right)}\,d(x,y),}
taken over a square with vertices {(−a, a), (a, a), (a, −a), (−a, −a)} on the xy-plane.
Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than
{\displaystyle I(a)^{2}}
, and similarly the integral taken over the square's circumcircle must be greater than
{\displaystyle I(a)^{2}}
. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates:
{\displaystyle {\begin{aligned}x&=r\cos \theta ,&y&=r\sin \theta \end{aligned}}}
{\displaystyle \mathbf {J} (r,\theta )={\begin{bmatrix}{\dfrac {\partial x}{\partial r}}&{\dfrac {\partial x}{\partial \theta }}\\[1em]{\dfrac {\partial y}{\partial r}}&{\dfrac {\partial y}{\partial \theta }}\end{bmatrix}}={\begin{bmatrix}\cos \theta &-r\sin \theta \\\sin \theta &{\hphantom {-}}r\cos \theta \end{bmatrix}}}
{\displaystyle d(x,y)=\left|J(r,\theta )\right|d(r,\theta )=r\,d(r,\theta ).}
{\displaystyle \int _{0}^{2\pi }\int _{0}^{a}re^{-r^{2}}\,dr\,d\theta <I^{2}(a)<\int _{0}^{2\pi }\int _{0}^{a{\sqrt {2}}}re^{-r^{2}}\,dr\,d\theta .}
(See the change from Cartesian coordinates to polar coordinates for help with this transformation.)
Integrating,
{\displaystyle \pi \left(1-e^{-a^{2}}\right)<I^{2}(a)<\pi \left(1-e^{-2a^{2}}\right).}
By the squeeze theorem, this gives the Gaussian integral
{\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }}.}
=== By Cartesian coordinates ===
A different technique, which goes back to Laplace (1812), is the following. Let
{\displaystyle {\begin{aligned}y&=xs\\dy&=x\,ds.\end{aligned}}}
Since the limits on s as y → ±∞ depend on the sign of x, it simplifies the calculation to use the fact that e−x2 is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is,
{\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx=2\int _{0}^{\infty }e^{-x^{2}}\,dx.}
Thus, over the range of integration, x ≥ 0, and the variables y and s have the same limits. This yields:
{\displaystyle {\begin{aligned}I^{2}&=4\int _{0}^{\infty }\int _{0}^{\infty }e^{-\left(x^{2}+y^{2}\right)}dy\,dx\\[6pt]&=4\int _{0}^{\infty }\left(\int _{0}^{\infty }e^{-\left(x^{2}+y^{2}\right)}\,dy\right)\,dx\\[6pt]&=4\int _{0}^{\infty }\left(\int _{0}^{\infty }e^{-x^{2}\left(1+s^{2}\right)}x\,ds\right)\,dx\\[6pt]\end{aligned}}}
Then, using Fubini's theorem to switch the order of integration:
{\displaystyle {\begin{aligned}I^{2}&=4\int _{0}^{\infty }\left(\int _{0}^{\infty }e^{-x^{2}\left(1+s^{2}\right)}x\,dx\right)\,ds\\[6pt]&=4\int _{0}^{\infty }\left[{\frac {e^{-x^{2}\left(1+s^{2}\right)}}{-2\left(1+s^{2}\right)}}\right]_{x=0}^{x=\infty }\,ds\\[6pt]&=4\left({\frac {1}{2}}\int _{0}^{\infty }{\frac {ds}{1+s^{2}}}\right)\\[6pt]&=2\arctan(s){\Big |}_{0}^{\infty }\\[6pt]&=\pi .\end{aligned}}}
Therefore,
{\displaystyle I={\sqrt {\pi }}}
, as expected.
=== By Laplace's method ===
In the Laplace approximation, we deal only with terms up to second order in the Taylor expansion, so we consider
{\displaystyle e^{-x^{2}}\approx 1-x^{2}\approx (1+x^{2})^{-1}}
.
In fact, since
{\displaystyle (1+t)e^{-t}\leq 1}
for all
{\displaystyle t}
, we have the exact bounds:
{\displaystyle 1-x^{2}\leq e^{-x^{2}}\leq (1+x^{2})^{-1}}
We can then bound the integral in the Laplace-approximation limit:
{\displaystyle \int _{[-1,1]}(1-x^{2})^{n}dx\leq \int _{[-1,1]}e^{-nx^{2}}dx\leq \int _{[-1,1]}(1+x^{2})^{-n}dx}
That is,
{\displaystyle 2{\sqrt {n}}\int _{[0,1]}(1-x^{2})^{n}dx\leq \int _{[-{\sqrt {n}},{\sqrt {n}}]}e^{-x^{2}}dx\leq 2{\sqrt {n}}\int _{[0,1]}(1+x^{2})^{-n}dx}
By trigonometric substitution, we exactly compute those two bounds:
{\displaystyle 2{\sqrt {n}}(2n)!!/(2n+1)!!}
and
{\displaystyle 2{\sqrt {n}}(\pi /2)(2n-3)!!/(2n-2)!!}
By taking the square root of the Wallis formula,
{\displaystyle {\frac {\pi }{2}}=\prod _{n=1}^{\infty }{\frac {(2n)^{2}}{(2n-1)(2n+1)}}}
we have
{\displaystyle {\sqrt {\pi }}=2\lim _{n\to \infty }{\sqrt {n}}{\frac {(2n)!!}{(2n+1)!!}}}
, the desired lower bound limit. Similarly we can get the desired upper bound limit.
Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula.
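The lower-bound sequence 2√n · (2n)!!/(2n + 1)!! is easy to evaluate directly; a sketch showing it creeping up toward √π, in agreement with the Wallis-formula limit above:

```python
import math

# Sketch: the lower-bound sequence 2*sqrt(n)*(2n)!!/(2n+1)!! from the
# trigonometric-substitution bound; it increases toward sqrt(pi).
def lower_bound(n):
    ratio = 1.0                      # accumulates (2n)!!/(2n+1)!!
    for k in range(1, n + 1):
        ratio *= (2 * k) / (2 * k + 1)
    return 2 * math.sqrt(n) * ratio

vals = [lower_bound(n) for n in (10, 100, 10_000)]
assert vals[0] < vals[1] < vals[2] < math.sqrt(math.pi)
assert math.sqrt(math.pi) - vals[-1] < 1e-3
```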
== Relation to the gamma function ==
The integrand is an even function,
{\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}dx=2\int _{0}^{\infty }e^{-x^{2}}dx}
Thus, after the change of variable
{\textstyle x={\sqrt {t}}}
, this turns into the Euler integral
{\displaystyle 2\int _{0}^{\infty }e^{-x^{2}}dx=2\int _{0}^{\infty }{\frac {1}{2}}\ e^{-t}\ t^{-{\frac {1}{2}}}dt=\Gamma {\left({\frac {1}{2}}\right)}={\sqrt {\pi }}}
where
{\textstyle \Gamma (z)=\int _{0}^{\infty }t^{z-1}e^{-t}dt}
is the gamma function. This shows why the factorial of a half-integer is a rational multiple of
{\textstyle {\sqrt {\pi }}}
. More generally,
{\displaystyle \int _{0}^{\infty }x^{n}e^{-ax^{b}}dx={\frac {\Gamma {\left((n+1)/b\right)}}{ba^{(n+1)/b}}},}
which can be obtained by substituting
{\displaystyle t=ax^{b}}
in the integrand of the gamma function to get
{\textstyle \Gamma (z)=a^{z}b\int _{0}^{\infty }x^{bz-1}e^{-ax^{b}}dx}
.
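Both identities in this section can be spot-checked numerically; a sketch verifying Γ(1/2) = √π and the general formula for the sample values n = 2, a = 3, b = 2 (these values are arbitrary):

```python
import math

# Sketch: check Gamma(1/2) = sqrt(pi), then a midpoint-rule check of
#   integral_0^inf x^n exp(-a x^b) dx = Gamma((n+1)/b) / (b * a^((n+1)/b))
# for the illustrative choice n = 2, a = 3, b = 2.
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12

def integral(n, a, b, upper=20.0, steps=400_000):
    h = upper / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += x ** n * math.exp(-a * x ** b)
    return total * h

n, a, b = 2, 3.0, 2.0
closed_form = math.gamma((n + 1) / b) / (b * a ** ((n + 1) / b))
assert abs(integral(n, a, b) - closed_form) < 1e-6
```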
== Generalizations ==
=== The integral of a Gaussian function ===
The integral of an arbitrary Gaussian function is
{\displaystyle \int _{-\infty }^{\infty }e^{-a(x+b)^{2}}\,dx={\sqrt {\frac {\pi }{a}}}.}
An alternative form is
{\displaystyle \int _{-\infty }^{\infty }e^{-(ax^{2}+bx+c)}\,dx={\sqrt {\frac {\pi }{a}}}\,e^{{\frac {b^{2}}{4a}}-c}.}
This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution, for example.
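The alternative form can be checked against direct quadrature; a minimal sketch with the hypothetical coefficients a = 2, b = 1, c = 0.5:

```python
import math

# Sketch: midpoint-rule check of
#   integral exp(-(a x^2 + b x + c)) dx = sqrt(pi/a) * exp(b^2/(4a) - c)
# for arbitrary sample coefficients; tails beyond |x| = 12 are negligible.
a, b, c = 2.0, 1.0, 0.5

def integral(lo=-12.0, hi=12.0, steps=240_000):
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        x = lo + (k + 0.5) * h
        total += math.exp(-(a * x * x + b * x + c))
    return total * h

closed_form = math.sqrt(math.pi / a) * math.exp(b * b / (4 * a) - c)
assert abs(integral() - closed_form) < 1e-6
```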
=== Complex form ===
{\displaystyle \int _{-\infty }^{\infty }e^{{\frac {1}{2}}it^{2}}dt=e^{i\pi /4}{\sqrt {2\pi }}}
and more generally,
{\displaystyle \int _{\mathbb {R} ^{N}}e^{{\frac {1}{2}}i\mathbf {x} ^{T}A\mathbf {x} }dx=\det(A)^{-{\frac {1}{2}}}{\left(e^{i\pi /4}{\sqrt {2\pi }}\right)}^{N}}
for any positive-definite symmetric matrix
{\displaystyle A}
.
=== n-dimensional and functional generalization ===
Suppose A is a symmetric positive-definite (hence invertible) n × n precision matrix, which is the matrix inverse of the covariance matrix. Then,
{\displaystyle {\begin{aligned}\int _{\mathbb {R} ^{n}}\exp {\left(-{\frac {1}{2}}\mathbf {x} ^{\mathsf {T}}A\mathbf {x} \right)}\,d^{n}\mathbf {x} &=\int _{\mathbb {R} ^{n}}\exp {\left(-{\frac {1}{2}}\sum \limits _{i,j=1}^{n}A_{ij}x_{i}x_{j}\right)}\,d^{n}\mathbf {x} \\[1ex]&={\sqrt {\frac {{\left(2\pi \right)}^{n}}{\det A}}}={\sqrt {\frac {1}{\det \left(A/2\pi \right)}}}\\[1ex]&={\sqrt {\det \left(2\pi A^{-1}\right)}}\end{aligned}}}
By completing the square, this generalizes to
{\displaystyle \int _{\mathbb {R} ^{n}}\exp {\left(-{\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}A\mathbf {x} +\mathbf {b} ^{\mathsf {T}}\mathbf {x} +c\right)}\,d^{n}\mathbf {x} ={\sqrt {\det \left(2\pi A^{-1}\right)}}\exp \left({\tfrac {1}{2}}\mathbf {b} ^{\mathsf {T}}A^{-1}\mathbf {b} +c\right)}
This fact is applied in the study of the multivariate normal distribution.
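For a concrete two-dimensional case, the determinant formula can be checked against a brute-force grid sum; a sketch with one arbitrarily chosen symmetric positive-definite matrix A:

```python
import numpy as np

# Sketch: 2-D grid check of
#   integral exp(-x^T A x / 2) d^2 x = sqrt((2*pi)^2 / det A)
#                                    = sqrt(det(2*pi*A^{-1}))
# for a sample SPD matrix; the tails beyond |x| = 8 are negligible.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

lin = np.linspace(-8.0, 8.0, 1601)
h = lin[1] - lin[0]
X, Y = np.meshgrid(lin, lin)
quad_form = A[0, 0] * X**2 + 2 * A[0, 1] * X * Y + A[1, 1] * Y**2
numeric = np.exp(-0.5 * quad_form).sum() * h * h

closed = np.sqrt((2 * np.pi) ** 2 / np.linalg.det(A))
assert np.isclose(numeric, closed, rtol=1e-6)
assert np.isclose(closed, np.sqrt(np.linalg.det(2 * np.pi * np.linalg.inv(A))))
```

The trapezoid-like grid sum converges extremely fast here because the integrand is analytic and rapidly decaying.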
Also,
{\displaystyle \int x_{k_{1}}\cdots x_{k_{2N}}\,\exp {\left(-{\frac {1}{2}}\sum \limits _{i,j=1}^{n}A_{ij}x_{i}x_{j}\right)}\,d^{n}x={\sqrt {\frac {(2\pi )^{n}}{\det A}}}\,{\frac {1}{2^{N}N!}}\,\sum _{\sigma \in S_{2N}}(A^{-1})_{k_{\sigma (1)}k_{\sigma (2)}}\cdots (A^{-1})_{k_{\sigma (2N-1)}k_{\sigma (2N)}}}
where σ is a permutation of {1, …, 2N} and the extra factor on the right-hand side is the sum over all combinatorial pairings of {1, …, 2N} of N copies of A−1.
Alternatively,
{\displaystyle \int f(\mathbf {x} )\exp {\left(-{\frac {1}{2}}\sum _{i,j=1}^{n}A_{ij}x_{i}x_{j}\right)}d^{n}\mathbf {x} ={\sqrt {\frac {{\left(2\pi \right)}^{n}}{\det A}}}\,\left.\exp \left({\frac {1}{2}}\sum _{i,j=1}^{n}\left(A^{-1}\right)_{ij}{\partial \over \partial x_{i}}{\partial \over \partial x_{j}}\right)f(\mathbf {x} )\right|_{\mathbf {x} =0}}
for some analytic function f, provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series.
While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can define a Gaussian functional integral in analogy to the finite-dimensional case. There is still the problem, though, that
{\displaystyle (2\pi )^{\infty }}
is infinite and also, the functional determinant would also be infinite in general. This can be taken care of if we only consider ratios:
{\displaystyle {\begin{aligned}&{\frac {\displaystyle \int f(x_{1})\cdots f(x_{2N})\exp \left[{-\iint {\frac {1}{2}}A(x_{2N+1},x_{2N+2})f(x_{2N+1})f(x_{2N+2})\,d^{d}x_{2N+1}\,d^{d}x_{2N+2}}\right]{\mathcal {D}}f}{\displaystyle \int \exp \left[{-\iint {\frac {1}{2}}A(x_{2N+1},x_{2N+2})f(x_{2N+1})f(x_{2N+2})\,d^{d}x_{2N+1}\,d^{d}x_{2N+2}}\right]{\mathcal {D}}f}}\\[6pt]={}&{\frac {1}{2^{N}N!}}\sum _{\sigma \in S_{2N}}A^{-1}(x_{\sigma (1)},x_{\sigma (2)})\cdots A^{-1}(x_{\sigma (2N-1)},x_{\sigma (2N)}).\end{aligned}}}
In the DeWitt notation, the equation looks identical to the finite-dimensional case.
=== n-dimensional with linear term ===
If A is again a symmetric positive-definite matrix, then (assuming all are column vectors)
{\displaystyle {\begin{aligned}\int \exp \left(-{\frac {1}{2}}\sum _{i,j=1}^{n}A_{ij}x_{i}x_{j}+\sum _{i=1}^{n}b_{i}x_{i}\right)d^{n}\mathbf {x} &=\int \exp \left(-{\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}A\mathbf {x} +\mathbf {b} ^{\mathsf {T}}\mathbf {x} \right)d^{n}\mathbf {x} \\&={\sqrt {\frac {(2\pi )^{n}}{\det A}}}\exp \left({\tfrac {1}{2}}\mathbf {b} ^{\mathsf {T}}A^{-1}\mathbf {b} \right).\end{aligned}}}
=== Integrals of similar form ===
{\displaystyle \int _{0}^{\infty }x^{2n}e^{-{x^{2}}/{a^{2}}}\,dx={\sqrt {\pi }}{\frac {a^{2n+1}(2n-1)!!}{2^{n+1}}}}
{\displaystyle \int _{0}^{\infty }x^{2n+1}e^{-{x^{2}}/{a^{2}}}\,dx={\frac {n!}{2}}a^{2n+2}}
{\displaystyle \int _{0}^{\infty }x^{2n}e^{-bx^{2}}\,dx={\frac {(2n-1)!!}{b^{n}2^{n+1}}}{\sqrt {\frac {\pi }{b}}}}
{\displaystyle \int _{0}^{\infty }x^{2n+1}e^{-bx^{2}}\,dx={\frac {n!}{2b^{n+1}}}}
{\displaystyle \int _{0}^{\infty }x^{n}e^{-bx^{2}}\,dx={\frac {\Gamma ({\frac {n+1}{2}})}{2b^{\frac {n+1}{2}}}}}
where {\displaystyle n} is a positive integer.
An easy way to derive these is by differentiating under the integral sign.
{\displaystyle {\begin{aligned}\int _{-\infty }^{\infty }x^{2n}e^{-\alpha x^{2}}\,dx&=\left(-1\right)^{n}\int _{-\infty }^{\infty }{\frac {\partial ^{n}}{\partial \alpha ^{n}}}e^{-\alpha x^{2}}\,dx\\[1ex]&=\left(-1\right)^{n}{\frac {\partial ^{n}}{\partial \alpha ^{n}}}\int _{-\infty }^{\infty }e^{-\alpha x^{2}}\,dx\\[1ex]&={\sqrt {\pi }}\left(-1\right)^{n}{\frac {\partial ^{n}}{\partial \alpha ^{n}}}\alpha ^{-{\frac {1}{2}}}\\[1ex]&={\sqrt {\frac {\pi }{\alpha }}}{\frac {(2n-1)!!}{\left(2\alpha \right)^{n}}}\end{aligned}}}
One could also integrate by parts and find a recurrence relation to solve this.
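The even-moment formula obtained by differentiating under the integral sign can be checked by quadrature; a sketch for the sample value α = 1.5 and several n:

```python
import math

# Sketch: check integral_{-inf}^{inf} x^(2n) exp(-alpha x^2) dx
#          = sqrt(pi/alpha) * (2n-1)!! / (2*alpha)^n
# by midpoint-rule summation for alpha = 1.5 (an illustrative choice).
alpha = 1.5

def moment(two_n, lo=-12.0, hi=12.0, steps=240_000):
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        x = lo + (k + 0.5) * h
        total += x ** two_n * math.exp(-alpha * x * x)
    return total * h

def double_factorial(m):
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out           # (-1)!! = 1 by convention, covering n = 0

for n in (0, 1, 2, 3):
    closed = math.sqrt(math.pi / alpha) * double_factorial(2 * n - 1) / (2 * alpha) ** n
    assert abs(moment(2 * n) - closed) < 1e-6
```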
=== Higher-order polynomials ===
Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in n variables may depend only on SL(n)-invariants of the polynomial. One such invariant is the discriminant, zeros of which mark the singularities of the integral. However, the integral may also depend on other invariants.
Exponentials of other even polynomials can numerically be solved using series. These may be interpreted as formal calculations when there is no convergence. For example, the solution to the integral of the exponential of a quartic polynomial is
{\displaystyle \int _{-\infty }^{\infty }e^{ax^{4}+bx^{3}+cx^{2}+dx+f}\,dx={\frac {1}{2}}e^{f}\sum _{\begin{smallmatrix}n,m,p=0\\n+p=0{\bmod {2}}\end{smallmatrix}}^{\infty }{\frac {b^{n}}{n!}}{\frac {c^{m}}{m!}}{\frac {d^{p}}{p!}}{\frac {\Gamma {\left({\frac {3n+2m+p+1}{4}}\right)}}{{\left(-a\right)}^{\frac {3n+2m+p+1}{4}}}}.}
The n + p = 0 mod 2 requirement is because the integral from −∞ to 0 contributes a factor of (−1)n+p/2 to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory.
== See also ==
List of integrals of Gaussian functions
Common integrals in quantum field theory
Normal distribution
List of integrals of exponential functions
Error function
Berezin integral
== References ==
=== Citations ===
=== Sources ===
The Beverton–Holt model is a classic discrete-time population model which gives the expected number n_{t+1} (or density) of individuals in generation t + 1 as a function of the number of individuals in the previous generation,
{\displaystyle n_{t+1}={\frac {R_{0}n_{t}}{1+n_{t}/M}}.}
Here R0 is interpreted as the proliferation rate per generation and K = (R0 − 1) M is the carrying capacity of the environment. The Beverton–Holt model was introduced in the context of fisheries by Beverton & Holt (1957). Subsequent work has derived the model under other assumptions such as contest competition (Brännström & Sumpter 2005), within-year resource limited competition (Geritz & Kisdi 2004) or even as the outcome of a source-sink Malthusian patches linked by density-dependent dispersal (Bravo de la Parra et al. 2013). The Beverton–Holt model can be generalized to include scramble competition (see the Ricker model, the Hassell model and the Maynard Smith–Slatkin model). It is also possible to include a parameter reflecting the spatial clustering of individuals (see Brännström & Sumpter 2005).
Despite being nonlinear, the model can be solved explicitly, since it is in fact an inhomogeneous linear equation in 1/n.
The solution is
{\displaystyle n_{t}={\frac {Kn_{0}}{n_{0}+(K-n_{0})R_{0}^{-t}}}.}
Because of this structure, the model can be considered as the discrete-time analogue of the continuous-time logistic equation for population growth introduced by Verhulst; for comparison, the logistic equation is
{\displaystyle {\frac {dN}{dt}}=rN\left(1-{\frac {N}{K}}\right),}
and its solution is
{\displaystyle N(t)={\frac {KN(0)}{N(0)+(K-N(0))e^{-rt}}}.}
== References ==
Beverton, R. J. H.; Holt, S. J. (1957), On the Dynamics of Exploited Fish Populations, Fishery Investigations Series II Volume XIX, Ministry of Agriculture, Fisheries and Food
Brännström, Åke; Sumpter, David J. T. (2005), "The role of competition and clustering in population dynamics" (PDF), Proc. R. Soc. B, vol. 272, no. 1576, pp. 2065–2072, doi:10.1098/rspb.2005.3185, PMC 1559893, PMID 16191618
Bravo de la Parra, R.; Marvá, M.; Sánchez, E.; Sanz, L. (2013), "Reduction of discrete dynamical systems with applications to dynamics population models" (PDF), Math Model Nat Phenom, vol. 8, no. 6, pp. 107–129
Geritz, Stefan A. H.; Kisdi, Éva (2004), "On the mechanistic underpinning of discrete-time population models with complex dynamics", J. Theor. Biol., vol. 228, no. 2, pp. 261–269, Bibcode:2004JThBi.228..261G, doi:10.1016/j.jtbi.2004.01.003, PMID 15094020
Ricker, W. E. (1954), "Stock and recruitment", J. Fisheries Res. Board Can., vol. 11, pp. 559–623
In queueing theory, a discipline within the mathematical theory of probability, Buzen's algorithm (or convolution algorithm) is an algorithm for calculating the normalization constant G(N) in the Gordon–Newell theorem. This method was first proposed by Jeffrey P. Buzen in his 1971 PhD dissertation and subsequently published in a refereed journal in 1973. Computing G(N) is required to compute the stationary probability distribution of a closed queueing network.
Performing a naïve computation of the normalizing constant requires enumeration of all states. For a closed network with N circulating customers and M service facilities, G(N) is the sum of
{\displaystyle {\tbinom {N+M-1}{M-1}}}
individual terms, with each term consisting of M factors raised to powers whose sum is N. Buzen's algorithm computes G(N) using only NM multiplications and NM additions. This dramatic improvement opened the door to applying the Gordon–Newell theorem to models of real-world computer systems, as well as flexible manufacturing systems and other cases where bottlenecks and queues can form within networks of interconnected service facilities. The values of G(1), G(2), ..., G(N − 1), which can be used to calculate other important quantities of interest, are computed as by-products of the algorithm.
== Problem setup ==
Consider a closed queueing network with M service facilities and N circulating customers. Assume that the service time for a customer at service facility i is given by an exponentially distributed random variable with parameter μi and that, after completing service at service facility i, a customer will proceed next to service facility j with probability pij.
Let
{\displaystyle \mathbb {P} (n_{1},n_{2},\cdots ,n_{M})}
be the steady state probability that the number of customers at service facility i is equal to ni for i = 1, 2, ... , M . It follows from the Gordon–Newell theorem that
{\displaystyle \mathbb {P} (n_{1},n_{2},\cdots ,n_{M})={\frac {1}{{\text{G}}(N)}}\left(X_{1}\right)^{n_{1}}\left(X_{2}\right)^{n_{2}}\cdots \left(X_{M}\right)^{n_{M}}}
This result is usually written more compactly as
{\displaystyle \mathbb {P} (n_{1},n_{2},\cdots ,n_{M})={\frac {1}{{\text{G}}(N)}}\prod _{i=1}^{M}\left(X_{i}\right)^{n_{i}}}
The values of Xi are determined by solving
{\displaystyle \mu _{j}X_{j}=\sum _{i=1}^{M}\mu _{i}X_{i}p_{ij}\quad {\text{ for }}j=1,\ldots ,M.}
G(N) is a normalizing constant chosen so that the sum of all
{\displaystyle {\tbinom {N+M-1}{M-1}}}
values of
{\displaystyle \mathbb {P} (n_{1},n_{2},\cdots ,n_{M})}
is equal to 1. Buzen's algorithm represents the first efficient procedure for computing G(N).
== Algorithm description ==
The individual terms that must be added together to compute G(N) all have the following form:
{\displaystyle \left(X_{1}\right)^{n_{1}}\left(X_{2}\right)^{n_{2}}\cdots \left(X_{M}\right)^{n_{M}}.}
Note that this set of terms can be partitioned into two groups. The first group comprises all terms for which the exponent of XM is greater than or equal to 1. This implies that XM raised to the power 1 can be factored out of each of these terms.
After factoring out XM, a surprising result emerges: the modified terms in the first group are identical to the terms used to compute the normalizing constant for the same network with one customer removed. Thus, the sum of the terms in the first group can be written as “XM times G(N -1)”. This insight provides the foundation for the development of the algorithm.
Next consider the second group. The exponent of XM for every term in this group is zero. As a result, service facility M effectively disappears from all terms in this group (since it reduces in every case to a factor of 1). This leaves the total number of customers at the remaining M -1 service facilities equal to N. The second group includes all possible arrangements of these N customers.
To express this concept precisely, assume that X1, X2, … XM have been obtained for a given network with M service facilities. For any n ≤ N and m ≤ M, define g(n,m) as the normalizing constant for a network with n customers, m service facilities (1,2, … m), and values of X1, X2, … Xm that match the first m members of the original sequence X1, X2, … XM .
Given this definition, the sum of the terms in the second group can now be written as g(N, M -1).
It also follows immediately that “XM times G(N -1)”, the sum of the terms in the first group, can be re-written as “XM times g(N -1,M )”.
In addition, the normalizing constant G(N) in the Gordon-Newell theorem can now be re-written as g(N,M).
Since G(N) is equal to the combined sum of the terms in the first and second groups,
G(N) = g(N,M) = XM g(N-1,M) + g(N,M-1)
This same recurrence relation clearly exists for any intermediate value of n from 1 to N, and for any intermediate value of m from 1 to M .
This implies g(n,m) = Xm g(n -1,m) + g(n,m -1). Buzen’s algorithm is simply the iterative application of this fundamental recurrence relation, along with the following boundary conditions.
g(0,m) = 1 for m = 1, 2, …M
g(n,1) = (X1)n for n = 0, 1, … N
== Marginal distributions, expected number of customers ==
The Gordon-Newell theorem enables analysts to determine the stationary probability associated with each individual state of a closed queueing network. These individual probabilities must then be added together to evaluate other important probabilities. For example P(ni ≥ k), the probability that the total number of customers at service center i is greater than or equal to k, must be summed over all values of ni ≥ k and, for each such value of ni, over all possible ways the remaining N – ni customers can be distributed across the other M -1 service centers in the network.
Many of these marginal probabilities can be computed with minimal additional effort. This is easy to see for the case of P(ni ≥ k). Clearly, Xi must be raised to the power of k or higher in every state where the number of customers at service center i is greater than or equal to k. Thus (Xi)k can be factored out from each of these probabilities, leaving a set of modified probabilities whose sum is given by G(N-k)/G(N). This observation yields the following simple and highly efficient result:
P(ni ≥ k) = (Xi)k G(N-k)/G(N)
This relationship can then be used to compute the marginal distributions and expected number of customers at each service facility.
{\displaystyle \mathbb {P} (n_{i}=k)={\frac {X_{i}^{k}}{G(N)}}[G(N-k)-X_{i}G(N-k-1)]\quad {\text{ for }}k=0,1,\ldots ,N-1,}
{\displaystyle \mathbb {P} (n_{i}=N)={\frac {X_{i}^{N}}{G(N)}}.}
The expected number of customers at service facility i is given by
{\displaystyle \mathbb {E} (n_{i})=\sum _{k=1}^{N}X_{i}^{k}{\frac {G(N-k)}{G(N)}}.}
These characterizations of quantities of interest in terms of the G(n) are also due to Buzen.
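As a sketch of how these formulas combine in practice (not from the source; the X values and population size below are made-up examples), one can compute G(0), ..., G(N) with Buzen's recurrence and then evaluate the marginal distribution and expected queue length at one facility:

```python
def buzen_G(X, N):
    """Return [G(0), ..., G(N)] via g(n,m) = g(n,m-1) + X_m * g(n-1,m)."""
    C = [1.0] + [0.0] * N
    for x in X:
        for n in range(1, N + 1):
            C[n] += x * C[n - 1]
    return C

def marginal(Xi, G, N):
    """P(n_i = k) for k = 0..N at a facility with term Xi."""
    P = [Xi**k / G[N] * (G[N - k] - Xi * G[N - k - 1]) for k in range(N)]
    P.append(Xi**N / G[N])          # k = N: no G(-1) term
    return P

X, N = [1.0, 0.5, 0.5], 3           # made-up example network
G = buzen_G(X, N)
P = marginal(X[0], G, N)            # distribution at facility 1
EN = sum(k * p for k, p in enumerate(P))
assert abs(sum(P) - 1.0) < 1e-12    # marginal probabilities sum to 1
# EN agrees with the expectation formula sum_{k=1..N} X_i^k G(N-k)/G(N)
assert abs(EN - sum(X[0]**k * G[N - k] / G[N] for k in range(1, N + 1))) < 1e-12
```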
== Implementation ==
It will be assumed that the Xm have been computed by solving the relevant equations and are available as an input to our routine. Although g(n,m) is in principle a two dimensional matrix, it can be computed in a column by column fashion starting from the top of the leftmost column and running down each column to the bottom before proceeding to the next column on the right. The routine uses a single column vector C to represent the current column of g.
The first loop in the algorithm below initializes the column vector C[n] so that C[0] = 1 and C[n] = 0 for n ≥ 1. Note that C[0] remains equal to 1 throughout all subsequent iterations.
In the second loop, each successive value of C[n] for n ≥ 1 is set equal to the corresponding value of g(n,m) as the algorithm proceeds down column m. This is achieved by setting each successive value of C[n] equal to g(n,m-1) plus Xm times g(n-1,m).
Note that g(n,m-1) is the previous value of C[n], and g(n-1,m) is the current value of C[n-1].
At completion, the final values of C[n] correspond to column M in the matrix g(n,m). Thus they represent the desired values G(0), G(1), ... , G(N).
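A minimal sketch of the routine just described (not from the source; the X values are assumed to be precomputed inputs, and the brute-force enumeration is included only to illustrate what the recurrence avoids):

```python
from itertools import product

def buzen(X, N):
    """Column-by-column computation of G(0), ..., G(N) using one vector C."""
    C = [0.0] * (N + 1)
    C[0] = 1.0                        # first loop: C[0] = 1, C[n] = 0 for n >= 1
    for x in X:                       # proceed down column m for each facility
        for n in range(1, N + 1):
            # old C[n] holds g(n, m-1); current C[n-1] holds g(n-1, m)
            C[n] = C[n] + x * C[n - 1]
    return C                          # C[n] == G(n) for n = 0, ..., N

def brute_force_G(X, N):
    """Direct enumeration of all states, for comparison only."""
    total = 0.0
    for state in product(range(N + 1), repeat=len(X)):
        if sum(state) == N:
            term = 1.0
            for x, n in zip(X, state):
                term *= x ** n
            total += term
    return total

X, N = [2.0, 1.0, 0.5], 4             # made-up example inputs
G = buzen(X, N)
assert G[0] == 1.0
assert abs(G[N] - brute_force_G(X, N)) < 1e-9
```

The inner update touches each of the N entries once per facility, giving the NM multiplications and NM additions cited above, versus the combinatorial number of terms the brute-force sum enumerates.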
== References ==
Jain: The Convolution Algorithm (class handout)
Menasce: Convolution Approach to Queueing Algorithms (presentation)
Word error rate (WER) is a common metric of the performance of a speech recognition or machine translation system. The WER metric typically ranges from 0 to 1, where 0 indicates that the compared pieces of text are exactly identical, and 1 (or larger) indicates that they are completely different, with no similarity. For example, a WER of 0.8 means an 80% error rate for the compared sentences.
The general difficulty of measuring performance lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the Levenshtein distance, working at the word level instead of the phoneme level. The WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort.
This problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. The relationship has also been studied via a power law that describes the correlation between perplexity and word error rate.
Word error rate can then be computed as:
{\displaystyle {\mathit {WER}}={\frac {S+D+I}{N}}={\frac {S+D+I}{S+D+C}}}
where
S is the number of substitutions,
D is the number of deletions,
I is the number of insertions,
C is the number of correct words,
N is the number of words in the reference (N=S+D+C)
The intuition behind 'deletion' and 'insertion' is how to get from the reference to the hypothesis. So if we have the reference "This is wikipedia" and hypothesis "This _ wikipedia", we call it a deletion.
Note that since N is the number of words in the reference, the word error rate can be larger than 1.0, namely if the number of insertions I is larger than the number of correct words C.
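The counts S, D, and I come from a word-level Levenshtein alignment, computed by dynamic programming and a backtrace. A minimal sketch (not from the source), applied to the reference/hypothesis example above:

```python
def wer_counts(reference, hypothesis):
    """Word-level Levenshtein alignment; returns (S, D, I, C) counts."""
    ref, hyp = reference.split(), hypothesis.split()
    R, H = len(ref), len(hyp)
    # d[i][j] = minimum edit distance between ref[:i] and hyp[:j]
    d = [[0] * (H + 1) for _ in range(R + 1)]
    for i in range(1, R + 1):
        d[i][0] = i
    for j in range(1, H + 1):
        d[0][j] = j
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    # Backtrack one optimal path, classifying each step.
    S = D = I = C = 0
    i, j = R, H
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            if ref[i - 1] != hyp[j - 1]:
                S += 1           # substitution
            else:
                C += 1           # correct word
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            D += 1               # deletion: reference word missing from hypothesis
            i -= 1
        else:
            I += 1               # insertion: extra word in the hypothesis
            j -= 1
    return S, D, I, C

S, D, I, C = wer_counts("This is wikipedia", "This wikipedia")
wer = (S + D + I) / (S + D + C)  # = 1/3: one deletion, N = 3
```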
When reporting the performance of a speech recognition system, sometimes word accuracy (WAcc) is used instead:
{\displaystyle {\mathit {WAcc}}=1-{\mathit {WER}}={\frac {N-S-D-I}{N}}={\frac {C-I}{N}}}
Since the WER can be larger than 1.0, the word accuracy can be smaller than 0.0.
== Experiments ==
It is commonly believed that a lower word error rate shows superior accuracy in recognition of speech, compared with a higher word error rate. However, at least one study has shown that this may not be true. In a Microsoft Research experiment, it was shown that, if people were trained under an objective "that matches the optimization objective for understanding" (Wang, Acero and Chelba, 2003), they would show a higher accuracy in understanding of language than people who demonstrated a lower word error rate, showing that true understanding of spoken language relies on more than just high word recognition accuracy.
== Other metrics ==
One problem with using a generic formula such as the one above, however, is that no account is taken of the effect that different types of error may have on the likelihood of successful outcome, e.g. some errors may be more disruptive than others and some may be corrected more easily than others. These factors are likely to be specific to the syntax being tested. A further problem is that, even with the best alignment, the formula cannot distinguish a substitution error from a combined deletion plus insertion error.
Hunt (1990) has proposed the use of a weighted measure of performance accuracy where errors of substitution are weighted at unity but errors of deletion and insertion are both weighted only at 0.5, thus:
{\displaystyle {\mathit {WER}}={\frac {S+0.5D+0.5I}{N}}}
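A minimal sketch of Hunt's weighted measure (not from the source; the counts below are made-up examples):

```python
def hunt_wer(S, D, I, N):
    """Hunt's weighted measure: substitutions at unit cost,
    deletions and insertions each at half cost."""
    return (S + 0.5 * D + 0.5 * I) / N

# 2 substitutions, 1 deletion and 1 insertion in a 10-word reference:
assert hunt_wer(2, 1, 1, 10) == 0.3
```

Halving the deletion and insertion weights reflects the view that a substitution is, in effect, one deletion plus one insertion at the same position.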
There is some debate, however, as to whether Hunt's formula may properly be used to assess the performance of a single system, as it was developed as a means of comparing more fairly competing candidate systems. A further complication is added by whether a given syntax allows for error correction and, if it does, how easy that process is for the user. There is thus some merit to the argument that performance metrics should be developed to suit the particular system being measured.
Whichever metric is used, however, one major theoretical problem in assessing the performance of a system is deciding whether a word has been “mis-pronounced,” i.e. does the fault lie with the user or with the recogniser. This may be particularly relevant in a system which is designed to cope with non-native speakers of a given language or with strong regional accents.
The pace at which words should be spoken during the measurement process is also a source of variability between subjects, as is the need for subjects to rest or take a breath. All such factors may need to be controlled in some way.
For text dictation it is generally agreed that performance accuracy at a rate below 95% is not acceptable, but this again may be syntax and/or domain specific, e.g. whether there is time pressure on users to complete the task, whether there are alternative methods of completion, and so on.
The term "Single Word Error Rate" sometimes refers to the percentage of incorrect recognitions for each different word in the system vocabulary.
== Edit distance ==
The word error rate may also be referred to as the length-normalized edit distance. The normalized edit distance between X and Y, d(X, Y), is defined as the minimum of W(P)/L(P), where P is an editing path between X and Y, W(P) is the sum of the weights of the elementary edit operations of P, and L(P) is the number of these operations (the length of P).
== See also ==
BLEU
F-Measure
METEOR
NIST (metric)
ROUGE (metric)
== References ==
=== Notes ===
=== Other sources ===
McCowan et al. 2005: On the Use of Information Retrieval Measures for Speech Recognition Evaluation
Hunt, M.J., 1990: Figures of Merit for Assessing Connected Word Recognisers (Speech Communication, 9, 1990, pp 239-336)
Zechner, K.; Waibel, A.: Minimizing Word Error Rate in Textual Summaries of Spoken Language
Calligraphy (from Ancient Greek καλλιγραφία (kalligraphía) 'beautiful writing') is a visual art related to writing. It is the design and execution of lettering with a pen, ink brush, or other writing instruments. Contemporary calligraphic practice can be defined as "the art of giving form to signs in an expressive, harmonious, and skillful manner".
In East Asia and the Islamic world, where written forms allow for greater flexibility, calligraphy is regarded as a significant art form, and the form it takes may be affected by the meaning of the text or the individual words.
Modern Western calligraphy ranges from functional inscriptions and designs to fine-art pieces where the legibility of letters varies. Classical calligraphy differs from type design and non-classical hand-lettering, though a calligrapher may practice both.
Western calligraphy continues to flourish in the forms of wedding invitations and event invitations, font design and typography, original hand-lettered logo design, religious art, announcements, graphic design and commissioned calligraphic art, cut stone inscriptions, and memorial documents. It is also used for props, moving images for film and television, testimonials, birth and death certificates, maps, and other written works.
== Tools ==
=== Pens and brushes ===
The principal tools for a calligrapher are the pen and the brush. The pens used in calligraphy can have nibs that may be flat, round, or pointed. For decorative purposes, multi-nibbed pens (steel brushes) can be used. However, works have also been created with felt-tip and ballpoint pens, although these works do not employ angled lines. There are certain styles of calligraphy, such as Gothic script, that require a stub nib pen.
Common calligraphy pens and brushes include:
Quill
Dip pen
Ink brush
Qalam
Fountain pen
Chiselled marker
Reed pen
=== Inks, papers, and templates ===
The ink used for writing is usually water-based and is much less viscous than the oil-based ink used in printing. Certain specialty papers with high ink absorption and constant texture enable cleaner lines, although parchment or vellum is often used, as a knife can be used to erase imperfections and a light-box is not needed to allow lines to be visible through it. Normally, light boxes and templates are used to achieve straight lines without pencil markings detracting from the work. Ruled paper, either for a light box or direct use, is most often ruled every quarter or half an inch, although inch spaces are occasionally used. This is the case with Uncial script (hence the name "litterae unciales", which roughly translates to 'inch-high letters'), and college-ruled paper often serves well as a guideline.
== East Asia ==
Chinese calligraphy is locally called shūfǎ or fǎshū (書法 or 法書 in traditional Chinese, literally "the method or law of writing"); Japanese calligraphy is shodō (書道, literally "the way or principle of writing"); and Korean calligraphy is called seoye (Korean: 서예; Hanja: 書藝; literally "the art of writing"). The calligraphy of East Asian characters continues to form an important and appreciated constituent of contemporary traditional East Asian culture.
=== History ===
In ancient China, the oldest known Chinese characters are oracle bone script (甲骨文), carved on ox scapulae and tortoise plastrons, as the rulers in the Shang dynasty carved pits on such animals' bones and then baked them to gain auspice of military affairs, agricultural harvest, or even procreation and weather. During the divination ceremony, after the cracks were made, the characters were written with a brush on the shell or bone to be later carved. With the development of the bronzeware script (jīn wén) and large seal script (dà zhuàn) "cursive" signs continued. Mao Gong ding is one of the most famous examples of bronzeware script in Chinese calligraphic history. It contains 500 inscribed characters, the largest number of bronze inscriptions discovered to date. Moreover, each archaic kingdom of current China had its own set of characters.
In Imperial China, the graphs on old steles – some dating from 200 BCE, and in the small seal script (小篆 xiǎo zhuàn) style – have been preserved and can be viewed in museums even today.
About 220 BCE, the emperor Qin Shi Huang, the first to conquer the entire Chinese basin, imposed several reforms, among them Li Si's character unification, which created a set of 3300 standardized small seal characters. Despite the fact that the main writing implement of the time was already the brush, few papers survive from this period, and the main examples of this style are on steles.
The clerical script (隸書/隸书) (lì shū) which was more regularized, and in some ways similar to modern text, was also authorised under Qin Shi Huang.
Between clerical script and traditional regular script, there is another transitional type of calligraphic work called Wei Bei. It started during the North and South dynasties (420 to 589 CE) and ended before the Tang dynasty (618–907).
The traditional regular script (kǎi shū), still in use today, and largely finalized by Zhong You (鐘繇, 151–230) and his followers, is even more regularized. Its spread was encouraged by Emperor Mingzong of Later Tang (926–933), who ordered the printing of the classics using new wooden blocks in kaishu. Printing technologies here allowed a shape stabilization. The kaishu shape of characters 1000 years ago was mostly similar to that at the end of Imperial China; however, small changes to the characters have been made. For example, the shape of 广 has changed from the version in the Kangxi Dictionary of 1716 to the version found in modern books. The Kangxi and current shapes have tiny differences, while stroke order remains the same, according to the old style.
Styles which did not survive include bāfēnshū, a mix of 80% small seal script and 20% clerical script. Some variant Chinese characters were unorthodox or locally used for centuries. They were generally understood but always rejected in official texts. Some of these unorthodox variants, in addition to some newly created characters, compose the simplified Chinese character set.
=== Technique ===
Traditional East Asian writing uses the Four Treasures of the Study – ink brushes known as máobǐ (毛筆/毛笔), Chinese ink, paper, and inkstones – to write Chinese characters. These instruments of writing are also known as the Four Friends of the Study (Korean: 문방사우/文房四友, romanized: Munbang sau) in Korea. Besides the traditional four tools, desk pads and paperweights are also used.
Many different parameters influence the final result of a calligrapher's work. Physical parameters include the shape, size, stretch, and hair type of the ink brush; the color, color density and water density of the ink; as well as the paper's water absorption speed and surface texture. The calligrapher's technique also influences the result, as the look of finished characters are influenced by the quantity of ink and water the calligrapher lets the brush absorb and by the pressure, inclination, and direction of the brush. Changing these variables produces thinner or bolder strokes, and smooth or toothed borders. Eventually, the speed, accelerations and decelerations of a skilled calligrapher's movements aim to give "spirit" to the characters, greatly influencing their final shapes.
=== Styles ===
Cursive styles such as xíngshū (行書/行书) (semi-cursive or running script) and cǎoshū (草書/草书) (cursive, rough script, or grass script) are less constrained and faster, where movements made by the writing implement are more visible. These styles' stroke orders vary more, sometimes creating radically different forms. They are descended from the clerical script, in the same time as the regular script (Han dynasty), but xíngshū and cǎoshū were used for personal notes only, and never used as a standard. The cǎoshū style was highly appreciated during Emperor Wu of Han's reign (140–87 BCE).
Examples of modern printed styles are Song from the Song dynasty's printing press, and sans-serif. These are not considered traditional styles, and are normally not written.
=== Influences ===
Japanese and Korean calligraphy were each greatly influenced by Chinese calligraphy. Calligraphy has influenced most major art styles in East Asia, including ink and wash painting, a style of Chinese, Japanese, and Korean painting based entirely on calligraphy and which uses similar tools and techniques.
The Japanese and Koreans have also developed their own specific sensibilities and styles of calligraphy while incorporating Chinese influences.
=== Japan ===
Japanese calligraphy goes out of the set of CJK strokes to also include local alphabets such as hiragana and katakana, with specific problematics such as new curves and moves, and specific materials (Japanese paper, washi 和紙, and Japanese ink).
=== Korea ===
The modern Korean alphabet and its use of the circle required the creation of a new technique not used in traditional Chinese calligraphy.
=== Mongolia ===
Mongolian calligraphy is also influenced by Chinese calligraphy, from tools to style.
=== Tibet ===
Tibetan calligraphy is central to Tibetan culture. The script is derived from Indic scripts. The nobles of Tibet, such as the High Lamas and inhabitants of the Potala Palace, were often capable calligraphers. Tibet has been a center of Buddhism for several centuries, with said religion placing a high significance on the written word. This does not provide for a large body of secular pieces, although they do exist (but are usually related in some way to Tibetan Buddhism). Almost all high religious writing involved calligraphy, including letters sent by the Dalai Lama and other religious and secular authorities. Calligraphy is particularly evident on their prayer wheels, although this calligraphy was forged rather than scribed, much like Arab and Roman calligraphy is often found on buildings. Although originally done with a reed, Tibetan calligraphers now use chisel tipped pens and markers as well.
== Southeast Asia ==
=== Philippines ===
The Philippines has numerous ancient and indigenous scripts collectively called Suyat scripts. Various ethno-linguistic groups in the Philippines prior to Spanish colonization in the 16th century and up to the independence era in the 21st century have used the scripts with various mediums. By the end of colonialism, only four of the suyat scripts had survived and continued to be used by certain communities in everyday life. These four scripts are Hanunó'o/Hanunoo of the Hanuno'o Mangyan people, Buhid/Buid of the Buhid Mangyan people, Tagbanwa script of the Tagbanwa people, and Palaw'an/Pala'wan of the Palaw'an people. All four scripts were inscribed in the UNESCO Memory of the World international register, under the name Philippine Paleographs (Hanunoo, Buid, Tagbanua and Pala'wan), in 1999.
Due to dissent from colonialism, many artists and cultural experts have revived the usage of suyat scripts that went extinct due to their replacement by the Spanish-introduced Latin alphabet. The scripts being revived include the Kulitan script of the Kapampangan people, the badlit script of various Visayan ethnic groups, the Iniskaya script of the Eskaya people, the Baybayin script of the Tagalog people, and the Kur-itan script of the Ilocano people, among many others. Due to the diversity of suyat scripts, all calligraphy written in a suyat script is collectively called Filipino suyat calligraphy, although each script is distinct from the others. Calligraphy using the Western alphabet and the Arabic alphabet is also prevalent in the Philippines due to its colonial past. However, the Western and Arabic alphabets are not considered suyat, and therefore such calligraphy is not considered suyat calligraphy.
=== Vietnam ===
Vietnamese calligraphy is called thư pháp (書法, literally "the way of letters or words") and is based on Chữ Hán and Chữ Nôm, the historical Vietnamese writing systems rooted in Chinese characters, which were replaced by the Latin alphabet under French colonial influence. Calligraphic traditions maintaining the historical use of Han characters continue to be preserved in modern Vietnamese calligraphy.
== South Asia ==
The preservation of religious texts is the most common purpose for Indian calligraphy. Monastic Buddhist communities had members trained in calligraphy and shared responsibility for duplicating sacred scriptures. Jaina traders incorporated illustrated manuscripts celebrating Jaina saints. These manuscripts were produced using inexpensive material, like palm leaves and birch, with fine calligraphy.
=== Nepal ===
Nepalese calligraphy is primarily created using the Ranjana script. The script itself, along with its derivatives (like Lantsa, Phagpa, Kutila) are used in Nepal, Tibet, Bhutan, Leh, Mongolia, coastal Japan, and Korea to write "Om mani padme hum" and other sacred Buddhist texts, mainly those derived from Sanskrit and Pali.
== Africa ==
=== Egypt ===
Egyptian hieroglyphs were the formal writing system used in Ancient Egypt. Hieroglyphs combined logographic, syllabic and alphabetic elements, with a total of some 1,000 distinct characters.
=== Ethiopia ===
Ethiopian (Abyssinian) calligraphy began with the Ge'ez script, developed specifically for Ethiopian Semitic languages, which replaced Epigraphic South Arabian in the Kingdom of Aksum. In those languages that use it, such as Amharic and Tigrinya, the script is called Fidäl, which means 'script' or 'alphabet'. The Epigraphic South Arabian letters were used for a few inscriptions into the 8th century, though not in any South Arabian language since Dʿmt.
Early inscriptions in Ge'ez and Ge'ez script are dated to as early as the 5th century BCE, with a sort of proto-Ge'ez written in ESA since the 9th century BCE. Ge'ez literature begins with the Christianization of Ethiopia (and the civilization of Axum) in the 4th century, during the reign of Ezana of Axum.
The Ge'ez script is read from left to right and has been adapted to write other languages, usually ones that are also Semitic. The most widespread use is for Amharic in Ethiopia and Tigrinya in Eritrea and Ethiopia.
== Americas ==
=== Maya ===
Maya calligraphy was expressed via Maya glyphs; modern Maya calligraphy is mainly used on seals and monuments in the Yucatán Peninsula in Mexico. Maya glyphs are rarely used in government offices; however, in Campeche, Yucatán and Quintana Roo, calligraphy in Maya languages is written in Latin script rather than Maya glyphs. Some commercial companies in southern Mexico use Maya glyphs as symbols of their business. Some community associations and modern Maya brotherhoods use Maya glyphs as symbols of their groups.
Most of the archaeological sites in Mexico, such as Chichen Itza, Labna, Uxmal, Edzna, and Calakmul, have glyphs in their structures. Carved stone monuments known as stelae are common sources of ancient Maya calligraphy.
== Europe ==
Calligraphy in Europe is recognizable in the use of the Latin script in Western Europe, and in the use of the Greek, Armenian, and Georgian, and Cyrillic scripts in Eastern Europe.
=== Ancient Rome ===
The Latin alphabet appeared about 600 BCE in ancient Rome, and by the first century CE it had developed into Roman imperial capitals carved on stones, rustic capitals painted on walls, and Roman cursive for daily use. In the second and third centuries the uncial lettering style developed. As writing withdrew to monasteries, uncial script was found more suitable for copying the Bible and other religious texts. It was the monasteries which preserved calligraphic traditions during the fourth and fifth centuries, when the Roman Empire fell and Europe entered the early Middle Ages.
At the height of the Roman Empire, its power reached as far as Great Britain; when the empire fell, its literary influence remained. The Semi-uncial generated the Irish Semi-uncial and the small Anglo-Saxon minuscule. Each region developed its own standards following the main monastery of the region (i.e. Merovingian script, Laon script, Luxeuil script, Visigothic script, Beneventan script), which are mostly cursive and hardly readable.
=== Western Christendom ===
Christian churches promoted the development of writing through the prolific copying of the Bible, the Breviary, and other sacred texts. Two distinct styles of writing known as uncial and half-uncial (from the Latin uncia, or "inch") developed from a variety of Roman bookhands. The 7th–9th centuries in northern Europe were the heyday of Celtic illuminated manuscripts, such as the Book of Durrow, Lindisfarne Gospels and the Book of Kells.
Charlemagne's devotion to improved scholarship resulted in the recruiting of "a crowd of scribes", according to Alcuin, the Abbot of York. Alcuin developed the style known as the Caroline or Carolingian minuscule. The first manuscript in this hand was the Godescalc Evangelistary (finished 783) – a Gospel book written by the scribe Godescalc. Carolingian remains the one progenitor hand from which modern booktype descends.
In the eleventh century, the Caroline evolved into the blackletter ("Gothic") script, which was more compact and made it possible to fit more text on a page. The Gothic calligraphy styles became dominant throughout Europe, and in 1454, when Johannes Gutenberg developed the first printing press in Mainz, Germany, he adopted the Gothic style, making it the first typeface.
In the 15th century, the rediscovery of old Carolingian texts encouraged the creation of the humanist minuscule, or littera antiqua. The 17th century saw the Batarde script emerge in France, and the 18th century saw the English script spread across Europe and the world through books.
In the mid-1600s French officials, flooded with documents written in various hands and varied levels of skill, complained that many such documents were beyond their ability to decipher. The Office of the Financier thereupon restricted all legal documents to three hands, namely the Coulee, the Rhonde (known as Round hand in English), and a Speed Hand sometimes called the Bastarda.
While there were many great French masters at the time, the most influential in proposing these hands was Louis Barbedor, who published Les Ecritures Financière Et Italienne Bastarde Dans Leur Naturel c. 1650.
With the destruction of the Camera Apostolica during the sack of Rome (1527), the capital for writing masters moved to southern France. By 1600, the Italic Cursiva began to be replaced by a technological refinement, the Italic Chancery Circumflessa, which in turn fathered the Rhonde and later English Roundhand.
In England, Ayres and Banson popularized the Round Hand, while Snell is noted for his reaction to them and his warnings of restraint and proportionality. Still, Edward Crocker began publishing his copybooks 40 years before the aforementioned masters.
=== Eastern Europe ===
Other European styles use the same tools and practices, but differ by character set and stylistic preferences.
For Slavonic lettering, the history of the Slavonic and consequently Russian writing systems differs fundamentally from that of the Latin script, having evolved continuously from the 10th century to today.
==== Style ====
Unlike a typeface, handwritten calligraphy is characterised by irregularity in the characters which vary in size, shape, style, and color, producing a distinct aesthetic value, although it may also make the content more difficult to decode for some readers. As with Chinese or Islamic calligraphy, Western calligraphic script employed the use of strict rules and shapes. Quality writing had a rhythm and regularity to the letters, with a "geometrical" order of the lines on the page. Each character had, and often still has, a precise stroke order.
Sacred Western calligraphy has some unique features, such as the illumination of the first letter of each book or chapter in medieval times. A decorative "carpet page" may precede the literature, filled with ornate, geometrical depictions of bold-hued animals. The Lindisfarne Gospels (715–720 CE) are an early example. Many of the themes and variations of today's contemporary Western calligraphy are found in the pages of The Saint John's Bible. A particularly modern example is Timothy Botts' illustrated edition of the Bible, with 360 calligraphic images as well as a calligraphy typeface.
== Islamic world ==
Islamic calligraphy has evolved alongside Islam and the Arabic language. As it is based on Arabic letters, some call it "Arabic calligraphy". However, the term "Islamic calligraphy" is more appropriate, as it comprises all works of calligraphy by Muslim calligraphers of different national cultures, such as Persian or Ottoman calligraphy, from Al-Andalus in medieval Spain to China.
Islamic calligraphy is associated with geometric Islamic art (Arabesque) on the walls and ceilings of mosques as well as on the page or other materials. Contemporary artists in the Islamic world may draw on the heritage of calligraphy to create modern calligraphic inscriptions, like corporate logos, or abstractions.
Instead of recalling something related to the spoken word, calligraphy for Muslims is a visible expression of the highest art of all, the art of the spiritual world. Calligraphy has arguably become the most venerated form of Islamic art because it provides a link between the languages of the Muslims and the religion of Islam. The Qur'an has played an important role in the development and evolution of the Arabic language, and by extension, calligraphy in the Arabic alphabet. Proverbs and passages from the Qur'an continue to be sources for Islamic calligraphy.
During the Ottoman civilization, Islamic calligraphy attained special prominence. The city of Istanbul is an open exhibition hall for all kinds and varieties of calligraphy, from inscriptions in mosques to fountains, schools, houses, etc.
=== Antiquity ===
It is believed that ancient Persian script was invented by about 600–500 BCE to provide monument inscriptions for the Achaemenid kings. These scripts consisted of horizontal, vertical, and diagonal nail-shaped letters, which is why the script is called cuneiform (lit. "script of nails", khat-e-mikhi) in Persian. Centuries later, other scripts such as the "Pahlavi" and "Avestan" scripts were used in ancient Persia. Pahlavi was a middle Persian script developed from the Aramaic script and became the official script of the Sassanian empire (224–651 CE).
=== Contemporary scripts ===
The Nasta'liq style is the most popular contemporary style among classical Persian calligraphy scripts; Persian calligraphers call it the "bride of calligraphy scripts." This style is based on such a strict structure that it has changed very little since Mir Ali Tabrizi found the optimum composition of the letters and graphical rules; it has merely been fine-tuned over the past seven centuries. It has very strict rules for the graphical shape of the letters and for the combination of letters, words, and the composition of the whole calligraphy piece.
== Modern calligraphy ==
=== Revival ===
After printing became ubiquitous from the 15th century onward, the production of illuminated manuscripts began to decline. However, the rise of printing did not mean the end of calligraphy. A clear distinction between handwriting and more elaborate forms of lettering and script began to make its way into manuscripts and books at the beginning of the 16th century.
The modern revival of calligraphy began at the end of the 19th century, influenced by the aesthetics and philosophy of William Morris and the Arts and Crafts movement. Edward Johnston is regarded as the father of modern calligraphy. After studying published copies of manuscripts by the architect William Harrison Cowlishaw, he was introduced in 1898 to William Lethaby, principal of the Central School of Arts and Crafts, who advised him to study manuscripts at the British Museum.
This triggered Johnston's interest in the art of calligraphy with the use of a broad-edged pen. He began a teaching course in calligraphy at the Central School in Southampton Row, London from September 1899, where he influenced the typeface designer and sculptor Eric Gill. He was commissioned by Frank Pick to design a new typeface for London Underground, still used today (with minor modifications).
He has been credited with single-handedly reviving the art of modern penmanship and lettering through his books and teachings – his handbook on the subject, Writing & Illuminating, & Lettering (1906), was particularly influential on a generation of British typographers and calligraphers, including Graily Hewitt, Stanley Morison, Eric Gill, Alfred Fairbank and Anna Simons. Johnston also devised the crafted round calligraphic handwriting style, written with a broad pen, known today as the Foundational hand. Johnston initially taught his students an uncial hand using a flat pen angle, but later taught his hand using a slanted pen angle. He first referred to this hand as "Foundational Hand" in his 1909 publication, Manuscript & Inscription Letters for Schools and Classes and for the Use of Craftsmen.
=== Subsequent developments ===
Graily Hewitt taught at the Central School of Arts and Crafts and published together with Johnston throughout the early part of the century. Hewitt was central to the revival of gilding in calligraphy, and his prolific output on type design also appeared between 1915 and 1943. He is attributed with the revival of gilding with gesso and gold leaf on vellum. Hewitt helped found the Society of Scribes & Illuminators (SSI) in 1921, probably the world's foremost calligraphy society.
Hewitt is not without both critics and supporters in his rendering of Cennino Cennini's medieval gesso recipes. Donald Jackson, a British calligrapher, has sourced his gesso recipes from earlier centuries, a number of which are not presently in English translation. Graily Hewitt created the letters patent announcing the award to Prince Philip of the title of Duke of Edinburgh on November 19, 1947, the day before his marriage to Princess Elizabeth, the future Queen Elizabeth II.
Anna Simons, Johnston's pupil, was instrumental in sparking interest in calligraphy in Germany with her 1910 German translation of Writing & Illuminating, & Lettering. The Austrian Rudolf Larisch, a teacher of lettering at the Vienna School of Art, published six lettering books that greatly influenced German-speaking calligraphers. Because German-speaking countries had not abandoned the Gothic hand in printing, Gothic also had a powerful effect on their styles.
Rudolf Koch was a friend and younger contemporary of Larisch. Koch's books, type designs, and teaching made him one of the most influential calligraphers of the 20th century in northern Europe and later in the U.S. Larisch and Koch taught and inspired many European calligraphers, notably Karlgeorg Hoefer and Hermann Zapf.
Contemporary typefaces used by computers, from word processors like Microsoft Word or Apple Pages to professional design software packages like Adobe InDesign, find their roots in both the calligraphy of the past as well as several professional typeface designers.
== See also ==
== Notes ==
== References ==
=== Works cited ===
== External links ==
Calligraphy alphabets, a list of major historical scripts (simplified version) at Lettering Daily
French Renaissance Paleography This is a scholarly maintained site that presents over 100 carefully selected French manuscripts from 1300 to 1700, with tools to decipher and transcribe them. | Wikipedia/Calligraphic |
In mathematics, an additive set function is a function μ mapping sets to numbers, with the property that its value on a union of two disjoint sets equals the sum of its values on these sets, namely, μ(A ∪ B) = μ(A) + μ(B).
If this additivity property holds for any two sets, then it also holds for any finite number of sets, namely, the function value on the union of k disjoint sets (where k is a finite number) equals the sum of its values on the sets. Therefore, an additive set function is also called a finitely additive set function (the terms are equivalent). However, a finitely additive set function might not have the additivity property for a union of an infinite number of sets. A σ-additive set function is a function that has the additivity property even for countably infinitely many sets, that is,
μ(⋃_{n=1}^∞ A_n) = ∑_{n=1}^∞ μ(A_n).
Additivity and σ-additivity are particularly important properties of measures. They are abstractions of how intuitive properties of size (length, area, volume) of a set sum when considering multiple objects. Additivity is a weaker condition than σ-additivity; that is, σ-additivity implies additivity.
The term modular set function is equivalent to additive set function; see modularity below.
== Additive (or finitely additive) set functions ==
Let μ be a set function defined on an algebra of sets 𝒜 with values in [−∞, ∞] (see the extended real number line). The function μ is called additive, or finitely additive, if whenever A and B are disjoint sets in 𝒜, then
μ(A ∪ B) = μ(A) + μ(B).
A consequence of this is that an additive function cannot take both −∞ and +∞ as values, for the expression ∞ − ∞ is undefined.
One can prove by mathematical induction that an additive function satisfies
μ(⋃_{n=1}^N A_n) = ∑_{n=1}^N μ(A_n)
for any disjoint sets A₁, A₂, …, A_N in 𝒜.
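As a concrete illustration, here is a minimal Python sketch of a finitely additive set function built from point weights, μ(A) = Σ_{x∈A} w(x), checked on a union of three pairwise disjoint sets. The particular weights and sets are hypothetical, chosen only for the demonstration:

```python
# Sketch (illustrative only): mu(A) = sum of w(x) for x in A is finitely
# additive by construction, since a disjoint union partitions the summands.
from functools import reduce

w = {1: 0.5, 2: 1.5, 3: -2.0, 4: 4.0, 5: 0.25}  # arbitrary point weights

def mu(A):
    """Weighted 'measure' of a finite set A."""
    return sum(w.get(x, 0.0) for x in A)

# Three pairwise disjoint sets and their union
sets = [{1, 2}, {3}, {4, 5}]
union = reduce(set.union, sets, set())

# mu(union of A_n) equals the sum of mu(A_n)
assert mu(union) == sum(mu(A) for A in sets)
```

The assertion is exactly the N-fold additivity identity above, with N = 3.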
== σ-additive set functions ==
Suppose that 𝒜 is a σ-algebra. If for every sequence A₁, A₂, …, Aₙ, … of pairwise disjoint sets in 𝒜,
μ(⋃_{n=1}^∞ A_n) = ∑_{n=1}^∞ μ(A_n)
holds, then μ is said to be countably additive or σ-additive.
Every σ-additive function is additive but not vice versa, as shown below.
== τ-additive set functions ==
Suppose that, in addition to a sigma algebra 𝒜, we have a topology τ. If for every directed family of measurable open sets 𝒢 ⊆ 𝒜 ∩ τ,
μ(⋃𝒢) = sup_{G ∈ 𝒢} μ(G),
we say that μ is τ-additive. In particular, if μ is inner regular (with respect to compact sets) then it is τ-additive.
== Properties ==
Useful properties of an additive set function μ include the following.
=== Value of empty set ===
Either μ(∅) = 0, or μ assigns ∞ to all sets in its domain, or μ assigns −∞ to all sets in its domain. Proof: additivity implies that for every set A, μ(A) = μ(A ∪ ∅) = μ(A) + μ(∅) (it is possible, in the edge case of an empty domain, that the only choice for A is the empty set itself, but that still works). If μ(∅) ≠ 0, then this equality can be satisfied only by plus or minus infinity.
=== Monotonicity ===
If μ is non-negative and A ⊆ B then μ(A) ≤ μ(B). That is, μ is a monotone set function. Similarly, if μ is non-positive and A ⊆ B then μ(A) ≥ μ(B).
=== Modularity ===
A set function μ on a family of sets 𝒮 is called a modular set function and a valuation if whenever A, B, A ∪ B, and A ∩ B are elements of 𝒮, then
μ(A ∪ B) + μ(A ∩ B) = μ(A) + μ(B).
The above property is called modularity, and the argument below proves that additivity implies modularity: given A and B,
μ(A ∪ B) + μ(A ∩ B) = μ(A) + μ(B).
Proof: write A = (A ∩ B) ∪ (A ∖ B) and B = (A ∩ B) ∪ (B ∖ A) and A ∪ B = (A ∩ B) ∪ (A ∖ B) ∪ (B ∖ A), where all sets in the union are disjoint. Additivity implies that both sides of the equality equal
μ(A ∖ B) + μ(B ∖ A) + 2μ(A ∩ B).
However, the related properties of submodularity and subadditivity are not equivalent to each other.
Note that modularity has a different and unrelated meaning in the context of complex functions; see modular form.
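For a concrete check, the counting measure (cardinality) on finite sets is modular; the identity is exactly inclusion–exclusion. A minimal Python sketch, with arbitrary example sets:

```python
# Sketch: |A ∪ B| + |A ∩ B| == |A| + |B| (inclusion-exclusion), which is
# the modularity identity for the counting measure on finite sets.
A = {1, 2, 3, 4}
B = {3, 4, 5}

lhs = len(A | B) + len(A & B)   # mu(A ∪ B) + mu(A ∩ B)
rhs = len(A) + len(B)           # mu(A) + mu(B)
assert lhs == rhs               # both equal 7 here
```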
=== Set difference ===
If A ⊆ B and μ(B) − μ(A) is defined, then
μ(B ∖ A) = μ(B) − μ(A).
== Examples ==
An example of a σ-additive function is the function μ defined over the power set of the real numbers by
μ(A) = 1 if 0 ∈ A, and μ(A) = 0 if 0 ∉ A.
If A₁, A₂, …, Aₙ, … is a sequence of disjoint sets of real numbers, then either none of the sets contains 0, or precisely one of them does. In either case, the equality
μ(⋃_{n=1}^∞ A_n) = ∑_{n=1}^∞ μ(A_n)
holds.
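Although the domain here is the power set of the reals, the argument can be spot-checked computationally on finite collections of finite sets. A small Python sketch of the same μ (the particular sets are arbitrary):

```python
# Sketch of the Dirac-style example: mu(A) = 1 if 0 is in A, else 0.
# For pairwise disjoint sets, at most one can contain 0, so the value on
# the union equals the sum of the values.

def mu(A):
    return 1 if 0 in A else 0

# Exactly one of these disjoint sets contains 0
disjoint = [{-3, -1}, {0, 2}, {5}, {7, 9}]
union = set().union(*disjoint)
assert mu(union) == sum(mu(A) for A in disjoint) == 1

# None of these contains 0
none_contain = [{1}, {2}, {3}]
assert mu(set().union(*none_contain)) == sum(mu(A) for A in none_contain) == 0
```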
See measure and signed measure for more examples of σ-additive functions.
A charge is defined to be a finitely additive set function that maps ∅ to 0. (Cf. ba space for information about bounded charges, where a charge is said to be bounded when its range is a bounded subset of ℝ.)
=== An additive function which is not σ-additive ===
An example of an additive function which is not σ-additive is obtained by considering μ, defined over the Lebesgue sets of the real numbers ℝ by the formula
μ(A) = lim_{k→∞} (1/k) · λ(A ∩ (0, k)),
where λ denotes the Lebesgue measure and lim the Banach limit. It satisfies 0 ≤ μ(A) ≤ 1, and if sup A < ∞ then μ(A) = 0.
One can check that this function is additive by using the linearity of the limit. That this function is not σ-additive follows by considering the sequence of disjoint sets A_n = [n, n + 1) for n = 0, 1, 2, … The union of these sets is the positive reals, and μ applied to the union is one, while μ applied to any of the individual sets is zero, so the sum of μ(A_n) is also zero, which proves the counterexample.
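A rough numerical sketch of the densities involved follows. It makes simplifying assumptions not in the article: sets are represented as finite unions of intervals, and the Banach limit is replaced by ordinary limits, which happen to exist for these particular sets:

```python
# Sketch: density_k(A) = lambda(A ∩ (0, k)) / k for interval unions.

def length_in(intervals, k):
    """Lebesgue measure of (a union of disjoint intervals) ∩ (0, k)."""
    return sum(max(0.0, min(b, k) - max(a, 0.0)) for a, b in intervals)

# Each A_n = [n, n+1) has density tending to 0 ...
A_5 = [(5, 6)]
print(length_in(A_5, 10) / 10)      # 0.1
print(length_in(A_5, 1000) / 1000)  # 0.001, shrinking as k grows

# ... while their union (the positive reals, truncated for the demo)
# has density 1 for every k.
positives = [(0, 10**9)]
print(length_in(positives, 1000) / 1000)  # 1.0
```

This is the mismatch that breaks σ-additivity: each summand contributes 0, yet the union has measure 1.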
== Generalizations ==
One may define additive functions with values in any additive monoid (for example any group or more commonly a vector space). For sigma-additivity, one needs in addition that the concept of limit of a sequence be defined on that set. For example, spectral measures are sigma-additive functions with values in a Banach algebra. Another example, also from quantum mechanics, is the positive operator-valued measure.
== See also ==
Additive map – Z-module homomorphism
Hahn–Kolmogorov theorem – Theorem extending pre-measures to measures
Measure (mathematics) – Generalization of mass, length, area and volume
σ-finite measure – Concept in measure theory
Signed measure – Generalized notion of measure in mathematics
Submodular set function – Set-to-real map with diminishing returns
Subadditive set function
τ-additivity
ba space – The set of bounded charges on a given sigma-algebra
This article incorporates material from additive on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
== References == | Wikipedia/Sigma-additive_set_function |