where formula_3 is the matrix whose elements are formula_162.
For the sigma model on a symmetric space, as opposed to a Lie group, the formula_163 are restricted to span the subspace formula_164 rather than all of formula_146. The Lie commutator on formula_164 will "not" lie within formula_164; indeed, one has formula_168, and so a projection is still needed.
The model can be extended in a variety of ways. Besides the aforementioned Skyrme model, which introduces quartic terms, the model may be augmented by a torsion term to yield the Wess–Zumino–Witten model. |
Another possibility is frequently seen in supergravity models. Here, one notes that the Maurer–Cartan form formula_169 looks like "pure gauge". In the construction above for symmetric spaces, one can also consider the other projection
where, as before, the symmetric space corresponded to the split formula_171. This extra term can be interpreted as a connection on the fiber bundle formula_172 (it transforms as a gauge field). It is what is "left over" from the connection on formula_137. It can be endowed with its own dynamics, by writing |
with formula_175. Note that the differential here is just "d", and not a covariant derivative; this is "not" the Yang–Mills stress-energy tensor. This term is not gauge invariant by itself; it must be taken together with the part of the connection that embeds into formula_176. Taken together, formula_176, now with the connection as part of it, plus this term, forms a complete gauge-invariant Lagrangian (which does contain Yang–Mills terms when expanded out).
In physics, there are equations in every field that relate physical quantities to each other and are used to perform calculations. Entire handbooks of equations can only summarize most of the full subject, or else are highly specialized within a certain field. Physics, however, is not derived from formulae alone.
In physics, chemistry and related fields, master equations are used to describe the time evolution of a system that can be modelled as being in a probabilistic combination of states at any given time, where the switching between states is determined by a transition rate matrix. The equations are a set of differential equations – over time – of the probabilities that the system occupies each of the different states.
A master equation is a phenomenological set of first-order differential equations describing the time evolution of (usually) the probability of a system to occupy each one of a discrete set of states with regard to a continuous time variable "t". The most familiar form of a master equation is a matrix form: |
where formula_2 is a column vector (where element "i" represents state "i"), and formula_3 is the matrix of connections. The way connections among states are made determines the dimension of the problem; it is either |
When the connections are time-independent rate constants, the master equation represents a kinetic scheme, and the process is Markovian (any jumping time probability density function for state "i" is an exponential, with a rate equal to the value of the connection). When the connections depend on the actual time (i.e. matrix formula_3 depends on the time, formula_5 ), the process is not stationary and the master equation reads |
When the connections represent multi-exponential jumping time probability density functions, the process is semi-Markovian, and the equation of motion is an integro-differential equation termed the generalized master equation:
The matrix formula_3 can also represent birth and death, meaning that probability is injected into (birth) or removed from (death) the system; in that case, the process is not in equilibrium.
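As a concrete sketch of the matrix form, the time-independent case can be integrated directly with a matrix exponential, since then the probabilities evolve as formula_2(t) = exp(formula_3 t) formula_2(0). The 3-state rate matrix below is hypothetical, chosen only for illustration (Python):

    import numpy as np
    from scipy.linalg import expm

    # Hypothetical 3-state transition rate matrix. Convention as in the text:
    # A[k, l] is the rate from state l (second subscript, source) to state k
    # (first subscript, destination). Each column sums to zero, so total
    # probability is conserved.
    A = np.array([[-0.5,  0.2,  0.1],
                  [ 0.3, -0.4,  0.4],
                  [ 0.2,  0.2, -0.5]])

    P0 = np.array([1.0, 0.0, 0.0])       # start with certainty in state 0

    # Time-independent rates: P(t) = expm(A t) P(0).
    for t in (0.0, 1.0, 10.0):
        Pt = expm(A * t) @ P0
        print(t, Pt, Pt.sum())           # probabilities remain normalized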
Detailed description of the matrix and properties of the system. |
Let formula_3 be the matrix describing the transition rates (also known as kinetic rates or reaction rates). As always, the first subscript represents the row, the second subscript the column. That is, the source is given by the second subscript, and the destination by the first subscript. This is the opposite of what one might expect, but it is technically convenient. |
For each state "k", the increase in occupation probability depends on the contribution from all other states to "k", and is given by: |
where formula_11 is the probability for the system to be in the state formula_12, while the matrix formula_3 is filled with a grid of transition-rate constants. Similarly, formula_14 contributes to the occupation of all other states formula_15 |
In probability theory, this identifies the evolution as a continuous-time Markov process, with the integrated master equation obeying a Chapman–Kolmogorov equation. |
The master equation can be simplified so that the terms with "ℓ" = "k" do not appear in the summation. This allows calculations even if the main diagonal of formula_3 is not defined or has been assigned an arbitrary value.
The final equality arises from the fact that |
because the summation over the probabilities formula_20 yields one, a constant function. Since this has to hold for any probability formula_2 (and in particular for any probability of the form formula_22 for some k) we get |
Using this we can write the diagonal elements as |
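A minimal sketch of this construction, assuming only the off-diagonal rates are given (the numbers are hypothetical):

    import numpy as np

    # W[k, l] holds the off-diagonal rate from state l to state k; the
    # diagonal is left unspecified, as discussed above.
    W = np.array([[0.0, 0.2, 0.1],
                  [0.3, 0.0, 0.4],
                  [0.2, 0.2, 0.0]])

    # Each diagonal element is minus the total rate out of its state,
    # so that every column of the full matrix sums to zero.
    A = W - np.diag(W.sum(axis=0))
    print(A.sum(axis=0))                 # -> [0. 0. 0.]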
The master equation exhibits detailed balance if each of the terms of the summation disappears separately at equilibrium—i.e. if, for all states "k" and "ℓ" having equilibrium probabilities formula_25 and formula_26, |
These symmetry relations were proved on the basis of the time reversibility of microscopic dynamics (microscopic reversibility) as Onsager reciprocal relations. |
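Detailed balance is straightforward to test numerically. Below is a sketch, with a hypothetical two-state rate matrix whose stationary distribution is known in closed form; the function name is ours:

    import numpy as np

    # Detailed balance: A[k, l] * pi[l] == A[l, k] * pi[k] for all k != l.
    def has_detailed_balance(A, pi, tol=1e-12):
        n = len(pi)
        return all(abs(A[k, l] * pi[l] - A[l, k] * pi[k]) < tol
                   for k in range(n) for l in range(n) if k != l)

    a, b = 2.0, 1.0
    A = np.array([[-a,  b],
                  [ a, -b]])             # hypothetical two-state scheme
    pi = np.array([b, a]) / (a + b)      # its equilibrium distribution
    print(has_detailed_balance(A, pi))   # True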
Many physical problems in classical and quantum mechanics, as well as problems in other sciences, can be reduced to the form of a "master equation", thereby achieving a great simplification of the problem (see mathematical model).
The Lindblad equation in quantum mechanics is a generalization of the master equation describing the time evolution of a density matrix. Though the Lindblad equation is often referred to as a "master equation", it is not one in the usual sense, as it governs not only the time evolution of probabilities (diagonal elements of the density matrix), but also of variables containing information about quantum coherence between the states of the system (non-diagonal elements of the density matrix). |
Another special case of the master equation is the Fokker–Planck equation which describes the time evolution of a continuous probability distribution. Complicated master equations which resist analytic treatment can be cast into this form (under various approximations), by using approximation techniques such as the system size expansion. |
Stochastic chemical kinetics provide yet another example of the use of the master equation. A chemical master equation is used to model a set of chemical reactions when the number of molecules of one or more species is small (of the order of 100 or 1000 molecules).
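Such systems are often handled by sampling trajectories consistent with the chemical master equation rather than solving it directly. The following is a minimal Gillespie-type sketch for a hypothetical unimolecular reaction A → B; the function name and rate constant are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    def gillespie_decay(n_A, c, t_end):
        # Simulate A -> B with rate constant c, one reaction at a time.
        t, times, counts = 0.0, [0.0], [n_A]
        while n_A > 0:
            propensity = c * n_A                  # total rate of A -> B events
            t += rng.exponential(1.0 / propensity)
            if t > t_end:
                break
            n_A -= 1
            times.append(t)
            counts.append(n_A)
        return times, counts

    times, counts = gillespie_decay(n_A=100, c=0.1, t_end=50.0)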
A quantum master equation is a generalization of the idea of a master equation. Rather than just a system of differential equations for a set of probabilities (which only constitutes the diagonal elements of a density matrix), quantum master equations are differential equations for the entire density matrix, including off-diagonal elements. A density matrix with only diagonal elements can be modeled as a classical random process, therefore such an "ordinary" master equation is considered classical. Off-diagonal elements represent quantum coherence which is a physical characteristic that is intrinsically quantum mechanical. |
The Redfield equation and Lindblad equation are examples of approximate quantum master equations assumed to be Markovian. More accurate quantum master equations for certain applications include the polaron transformed quantum master equation, and the VPQME (variational polaron transformed quantum master equation). |
In a more precise sense, the PDF is used to specify the probability of the random variable falling "within a particular range of values", as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range—that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and its integral over the entire space is equal to 1. |
The terms "probability distribution function" and "probability function" have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values, or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. "Density function" itself is also used for the probability mass function, leading to further confusion. In general, though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.
Suppose bacteria of a certain species typically live 4 to 6 hours. The probability that a bacterium lives 5 hours is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.0000000000... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on. |
There is a probability density function "f" with "f"(5 hours) = 2 hour⁻¹. The integral of "f" over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window.
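A quick numerical restatement of the example, treating the density as locally constant at 2 hour⁻¹ near the 5-hour mark (a simplifying assumption; a real lifetime density would vary over the window):

    from scipy.integrate import quad

    f = lambda t: 2.0                    # density near t = 5 hours, in 1/hour

    for dt in (0.01, 0.001, 0.0001):
        p, _ = quad(f, 5.0, 5.0 + dt)    # probability of dying in [5, 5 + dt]
        print(dt, p)                     # -> 0.02, 0.002, 0.0002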
A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable formula_1 has density formula_2, where formula_2 is a non-negative Lebesgue-integrable function, if: |
Hence, if formula_5 is the cumulative distribution function of formula_1, then: |
and (if formula_2 is continuous at formula_9) |
Intuitively, one can think of formula_11 as being the probability of formula_1 falling within the infinitesimal interval formula_13. |
A random variable formula_1 with values in a measurable space formula_15 (usually formula_16 with the Borel sets as measurable subsets) has as probability distribution the measure "X"∗"P" on formula_15: the density of formula_1 with respect to a reference measure formula_19 on formula_15 is the Radon–Nikodym derivative: |
That is, "f" is any measurable function with the property that: |
In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof). |
It is not possible to define a density with reference to an arbitrary measure (e.g. one can't choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost everywhere unique. |
Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval [0, ½] has probability density "f"("x") = 2 for 0 ≤ "x" ≤ ½ and "f"("x") = 0 elsewhere.
The standard normal distribution has probability density |
If a random variable "X" is given and its distribution admits a probability density function "f", then the expected value of "X" (if the expected value exists) can be calculated as |
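A sketch of this computation by numerical quadrature, using the standard normal density mentioned above (so the expected value should come out near zero):

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal pdf

    total, _ = quad(f, -np.inf, np.inf)                    # integrates to 1
    mean, _ = quad(lambda x: x * f(x), -np.inf, np.inf)    # E[X] = 0
    print(total, mean)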
Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point. |
A distribution has a density function if and only if its cumulative distribution function "F"("x") is absolutely continuous. In this case: "F" is almost everywhere differentiable, and its derivative can be used as probability density: |
If a probability distribution admits a density, then the probability of every one-point set {"a"} is zero; the same holds for finite and countable sets. |
Two probability densities "f" and "g" represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero. |
In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following: |
If "dt" is an infinitely small number, the probability that "X" is included within the interval ("t", "t" + "dt") is equal to "f"("t") "dt", or: |
It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function, by using the Dirac delta function. (This is not possible with a probability density function in the sense defined above; it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution—that is, taking the values −1 or 1, with probability ½ each. The density of probability associated with this variable is:
More generally, if a discrete variable can take "n" different values among real numbers, then the associated probability density function is: |
where formula_30 are the discrete values accessible to the variable and formula_31 are the probabilities associated with these values. |
This substantially unifies the treatment of discrete and continuous probability distributions. For instance, the above expression allows for determining statistical characteristics of such a discrete variable (such as its mean, its variance and its kurtosis), starting from the formulas given for a continuous distribution of the probability.
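A short sketch of this unification: the Dirac deltas turn the continuous-distribution integrals into weighted sums over the discrete values (the values and probabilities below are hypothetical):

    import numpy as np

    x = np.array([-1.0, 0.0, 2.0])             # discrete values
    p = np.array([0.25, 0.25, 0.5])            # their probabilities

    mean = np.sum(x * p)                       # "integral" of x f(x) dx
    var = np.sum((x - mean)**2 * p)            # central second moment
    kurt = np.sum((x - mean)**4 * p) / var**2  # standardized fourth moment
    print(mean, var, kurt)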
It is common for probability density functions (and probability mass functions) to be parametrized, that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by formula_19 and formula_33 respectively, giving the family of densities
Since the parameters are constants, reparametrizing a density in terms of different parameters, to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones. Changing the domain of a probability density, however, is trickier and requires more work: see the section below on change of variables. |
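A sketch of such a parametrized family in code; substituting new values of the mean and standard deviation selects a different member of the family (the function name is ours):

    import numpy as np

    def normal_pdf(x, mu=0.0, sigma=1.0):
        # Density of the normal family, parametrized by mu and sigma.
        return np.exp(-((x - mu)**2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

    print(normal_pdf(0.0))                     # standard normal at its mode
    print(normal_pdf(0.0, mu=1.0, sigma=2.0))  # a reparametrized member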
For continuous random variables "X"1, …, "Xn", it is also possible to define a probability density function associated to the set as a whole, often called the joint probability density function. This density function is defined as a function of the "n" variables, such that, for any domain "D" in the "n"-dimensional space of the values of the variables "X"1, …, "Xn", the probability that a realisation of the set of variables falls inside the domain "D" is
If "F"("x"1, …, "x""n") = Pr("X"1 ≤ "x"1, …, "X""n" ≤ "x""n") is the cumulative distribution function of the vector ("X"1, …, "X""n"), then the joint probability density function can be computed as a partial derivative |
For "i" = 1, 2, …, "n", let "f""X""i"("x""i") be the probability density function associated with variable "Xi" alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables "X"1, …, "Xn" by integrating over all values of the other "n" − 1 variables: |
Continuous random variables "X"1, …, "Xn" admitting a joint density are all independent from each other if and only if |
If the joint probability density function of a vector of "n" random variables can be factored into a product of "n" functions of one variable |
(where each "fi" is not necessarily a density) then the "n" variables in the set are all independent from each other, and the marginal probability density function of each of them is given by |
This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call formula_41 a 2-dimensional random vector of coordinates ("X", "Y"): the probability to obtain formula_41 in the quarter plane of positive "x" and "y" is |
Function of random variables and change of variables in the probability density function. |
If the probability density function of a random variable (or vector) "X" is given as "fX"("x"), it is possible (but often not necessary; see below) to calculate the probability density function of some variable "Y" = "g"("X"). This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape using a known (for instance, uniform) random number generator.
It is tempting to think that in order to find the expected value "E"("g"("X")), one must first find the probability density "f""g"("X") of the new random variable "Y" = "g"("X"). However, rather than computing
The values of the two integrals are the same in all cases in which both "X" and "g"("X") actually have probability density functions. It is not necessary that "g" be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician. |
Let formula_46 be a monotonic function; then the resulting density function is
This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is, |
For functions that are not monotonic, the probability density function for "y" is |
where "n"("y") is the number of solutions in "x" for the equation formula_51, and formula_52 are these solutions. |
Suppose x is an "n"-dimensional random variable with joint density "f". If "y" = "H"(x), where "H" is a bijective, differentiable function, then "y" has density "g":
with the differential regarded as the Jacobian of the inverse of "H"(⋅), evaluated at y. |
For example, in the 2-dimensional case x = ("x"1, "x"2), suppose the transform "H" is given as "y"1 = "H"1("x"1, "x"2), "y"2 = "H"2("x"1, "x"2) with inverses "x"1 = "H"1−1("y"1, "y"2), "x"2 = "H"2−1("y"1, "y"2). The joint distribution for y = ("y"1, "y"2) has density
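A minimal sketch of the vector case for an invertible linear map, where the Jacobian of the inverse map is simply the inverse matrix (the matrix and the choice of joint density are hypothetical):

    import numpy as np

    B = np.array([[2.0, 1.0],
                  [0.0, 1.0]])                 # invertible transform y = B x
    B_inv = np.linalg.inv(B)

    f = lambda x: np.exp(-(x**2).sum() / 2) / (2 * np.pi)   # joint density of x

    def g(y):
        # g(y) = f(B^{-1} y) |det B^{-1}|, the change-of-variables formula.
        return f(B_inv @ y) * abs(np.linalg.det(B_inv))

    print(g(np.array([0.0, 0.0])))             # f(0) / |det B| = 1 / (4 pi)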
Let formula_55 be a differentiable function and formula_56 be a random vector taking values in formula_57, formula_58 be the probability density function of formula_56 and formula_60 be the Dirac delta function. It is possible to use the formulas above to determine formula_61, the probability density function of formula_62, which will be given by |
This result leads to the law of the unconscious statistician: |
which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that |
which if marginalized over formula_9 leads to the desired probability density function. |
The probability density function of the sum of two independent random variables "U" and "V", each of which has a probability density function, is the convolution of their separate density functions: |
It is possible to generalize the previous relation to a sum of N independent random variables, with densities "U"1, …, "UN": |
This can be derived from a two-way change of variables involving "Y=U+V" and "Z=V", similarly to the example below for the quotient of independent random variables. |
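A sketch of the convolution evaluated numerically on a grid, using two uniform(0, 1) densities so the sum has the known triangular density on [0, 2]:

    import numpy as np

    dx = 0.001
    x = np.arange(0.0, 2.0, dx)
    fU = np.where(x < 1.0, 1.0, 0.0)           # uniform(0, 1) density
    fV = np.where(x < 1.0, 1.0, 0.0)

    fY = np.convolve(fU, fV) * dx              # density of Y = U + V
    y = np.arange(len(fY)) * dx
    print(np.interp(1.0, y, fY))               # peak of the triangle, ~1.0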
Products and quotients of independent random variables. |
Given two independent random variables "U" and "V", each of which has a probability density function, the density of the product "Y" = "UV" and quotient "Y" = "U"/"V" can be computed by a change of variables.
To compute the quotient "Y" = "U"/"V" of two independent random variables "U" and "V", define the following transformation: |
Then, the joint density "p"("y","z") can be computed by a change of variables from "U,V" to "Y,Z", and "Y" can be derived by marginalizing out "Z" from the joint density. |
The Jacobian matrix formula_73 of this transformation is |
The distribution of "Y" can then be computed by marginalizing out "Z":
This method crucially requires that the transformation from "U","V" to "Y","Z" be bijective. The above transformation meets this because "Z" can be mapped directly back to "V", and for a given "V" the quotient "U"/"V" is monotonic. This is similarly the case for the sum "U" + "V", difference "U" − "V" and product "UV". |
Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables. |
Given two standard normal variables "U" and "V", the quotient can be computed as follows. First, the variables have the following density functions: |
This is the density of a standard Cauchy distribution. |
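A Monte Carlo sketch consistent with this result; sample size and window width are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(0)

    u = rng.standard_normal(1_000_000)
    v = rng.standard_normal(1_000_000)
    y = u / v                                  # quotient of standard normals

    cauchy = lambda t: 1.0 / (np.pi * (1.0 + t**2))
    print(np.mean(np.abs(y) < 1.0))            # ~0.5, as for a standard Cauchy
    print(np.mean((0.9 < y) & (y < 1.1)) / 0.2, cauchy(1.0))   # both ~0.159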
This article summarizes equations in the theory of electromagnetism. |
Here the subscripts "e" and "m" are used to distinguish between electric and magnetic charges. The definitions for monopoles are of theoretical interest, although real magnetic dipoles can be described using pole strengths. There are two possible units for monopole strength: Wb (weber) and A⋅m (ampere metre). Dimensional analysis shows that the two are related by "qm"(Wb) = "μ"0 "qm"(A⋅m).
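A one-line check of this relation, using the value of "μ"0 from scipy.constants; the monopole strength chosen is hypothetical:

    from scipy.constants import mu_0

    q_m_Am = 1.0                 # monopole strength in ampere metres (hypothetical)
    q_m_Wb = mu_0 * q_m_Am       # the same strength expressed in webers
    print(q_m_Wb)                # ~1.2566e-06 Wb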
Contrary to the strong analogy between (classical) gravitation and electrostatics, there are no "centre of charge" or "centre of electrostatic attraction" analogues. |
Below "N" = number of conductors or circuit components. Subcript "net" refers to the equivalent and resultant property value. |
Microplane model for constitutive laws of materials |
The basic idea of the microplane model is to express the constitutive law not in terms of tensors, but in terms of the vectors of stress and strain acting on planes of various orientations, called the microplanes. The use of vectors was inspired by G. I. Taylor's idea in 1938, which led to Taylor models for plasticity of polycrystalline metals. But the microplane models differ conceptually in two ways.
Firstly, to prevent model instability in post-peak softening damage, the kinematic constraint must be used instead of the static one. Thus, the strain (rather than stress) vector on each microplane is the projection of the macroscopic strain tensor, i.e., the strain vector on a microplane equals the macroscopic strain tensor contracted with the unit normal of that plane.
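A minimal sketch of this kinematic constraint, assuming a hypothetical small-strain tensor and a single microplane orientation:

    import numpy as np

    # Hypothetical symmetric small-strain tensor.
    eps = np.array([[1.0e-3, 2.0e-4, 0.0],
                    [2.0e-4, 5.0e-4, 0.0],
                    [0.0,    0.0,   -3.0e-4]])

    n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # unit normal of one microplane

    e = eps @ n          # strain vector on this microplane (projection of eps)
    e_N = n @ eps @ n    # its normal component
    print(e, e_N)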