A variable structure system, or VSS, is a discontinuous nonlinear system of the form
where formula_2 is the state vector, formula_3 is the time variable, and formula_4 is a "piecewise continuous" function. Due to the "piecewise" continuity of these systems, they behave like different continuous nonlinear systems in different regions of their state space. At the boundaries of these regions, their dynamics switch abruptly. Hence, their "structure" "varies" over different parts of their state space.
The development of variable structure control depends upon methods of analyzing variable structure systems, which are special cases of hybrid dynamical systems.
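As a minimal illustrative sketch (not drawn from the control literature), the following simulates a scalar system dx/dt = f(x) whose right-hand side switches between two linear "structures" at the boundary x = 0; all names and rate constants are hypothetical:

```python
import numpy as np

def f(x):
    # Two different continuous dynamics, one per region of state space;
    # the "structure" switches abruptly at the boundary x = 0.
    return -2.0 * x if x > 0 else -0.5 * x

def simulate(x0, dt=1e-3, steps=5000):
    # Simple forward-Euler integration of the piecewise system.
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x
```

Both structures here happen to be stable, so trajectories starting on either side of the boundary decay toward the origin, but at different rates.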
The covariant formulation of classical electromagnetism refers to ways of writing the laws of classical electromagnetism (in particular, Maxwell's equations and the Lorentz force) in a form that is manifestly invariant under Lorentz transformations, in the formalism of special relativity using rectilinear inertial coordinate systems. These expressions both make it simple to prove that the laws of classical electromagnetism take the same form in any inertial coordinate system, and also provide a way to translate the fields and forces from one frame to another. However, this is not as general as Maxwell's equations in curved spacetime or non-rectilinear coordinate systems.
This article uses the classical treatment of tensors and the Einstein summation convention throughout, and the Minkowski metric has the form diag(+1, −1, −1, −1). Where the equations are specified as holding in a vacuum, one could instead regard them as the formulation of Maxwell's equations in terms of "total" charge and current.
For a more general overview of the relationships between classical electromagnetism and special relativity, including various conceptual implications of this picture, see Classical electromagnetism and special relativity.
Lorentz tensors of the following kinds may be used in this article to describe bodies or particles:
The signs in the following tensor analysis depend on the convention used for the metric tensor. The convention used here is (+ − − −), corresponding to the Minkowski metric tensor:
The electromagnetic tensor is the combination of the electric and magnetic fields into a covariant antisymmetric tensor whose entries are built from the components of the E- and B-fields.
and the result of raising its indices is
where E is the electric field, B the magnetic field, and "c" the speed of light.
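As an illustrative numerical sketch, the covariant tensor and its raised-index form can be assembled directly; the sign pattern follows the diag(+1, −1, −1, −1) convention used in this article, while the specific field values are arbitrary test numbers:

```python
import numpy as np

c = 299_792_458.0  # speed of light (m/s)

def field_tensor(E, B):
    """Covariant field tensor F_{mu nu} in the diag(+,-,-,-) convention
    (a sketch: electric entries in the first row/column, magnetic in the
    spatial block)."""
    Ex, Ey, Ez = np.asarray(E, dtype=float) / c
    Bx, By, Bz = B
    return np.array([
        [0.0,  Ex,   Ey,   Ez],
        [-Ex,  0.0, -Bz,   By],
        [-Ey,  Bz,   0.0, -Bx],
        [-Ez, -By,   Bx,   0.0],
    ])

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric
F_lo = field_tensor([1e3, 0.0, 0.0], [0.0, 0.0, 1e-4])
F_hi = eta @ F_lo @ eta                  # raise both indices: F^{mu nu}
```

Raising both indices flips the sign of the electric entries but leaves the magnetic entries unchanged, as the test below confirms.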
The four-current is the contravariant four-vector which combines electric charge density "ρ" and electric current density j:
The electromagnetic four-potential is a covariant four-vector containing the electric potential (also called the scalar potential) "ϕ" and magnetic vector potential (or vector potential) A, as follows:
The differential of the electromagnetic potential is
In the language of differential forms, which provides the generalisation to curved spacetimes, these are the components of a 1-form formula_16 and a 2-form formula_17 respectively. Here, "formula_18" is the exterior derivative and formula_19 the wedge product.
The electromagnetic stress–energy tensor can be interpreted as the flux density of the momentum four-vector, and is a contravariant symmetric tensor that is the contribution of the electromagnetic fields to the overall stress–energy tensor:
where "formula_21" is the electric permittivity of vacuum, "μ"0 is the magnetic permeability of vacuum, the Poynting vector is
and the Maxwell stress tensor is given by
The electromagnetic field tensor "F" constructs the electromagnetic stress–energy tensor "T" by the equation:
where "η" is the Minkowski metric tensor (with signature ). Notice that we use the fact that
In vacuum (or for the microscopic equations, not including macroscopic material descriptions), Maxwell's equations can be written as two tensor equations.
The two inhomogeneous Maxwell's equations, Gauss's law and Ampère's law (with Maxwell's correction), combine into (with the (+ − − −) metric):
while the homogeneous equations – Faraday's law of induction and Gauss's law for magnetism – combine to form formula_26, which may be written using Levi-Civita duality as:
where "F""αβ" is the electromagnetic tensor, "J""α" is the four-current, "ε""αβγδ" is the Levi-Civita symbol, and the indices behave according to the Einstein summation convention.
Each of these tensor equations corresponds to four scalar equations, one for each value of "β".
Using the antisymmetric tensor notation and comma notation for the partial derivative (see Ricci calculus), the second equation can also be written more compactly as:
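That the homogeneous equations hold automatically once F is derived from a potential can be checked symbolically. The sketch below (using sympy, with an arbitrary illustrative four-potential A) verifies the cyclic identity ∂_α F_βγ + ∂_β F_γα + ∂_γ F_αβ = 0 for F_μν = ∂_μ A_ν − ∂_ν A_μ:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
# An arbitrary smooth four-potential A_mu (purely illustrative components):
A = [sp.sin(x * t), t * y**2, sp.exp(z) * x, t + x * y * z]

# F_{mu nu} = d_mu A_nu - d_nu A_mu
F = [[sp.diff(A[n], X[m]) - sp.diff(A[m], X[n]) for n in range(4)]
     for m in range(4)]

def bianchi(a, b, c):
    # Cyclic sum d_a F_bc + d_b F_ca + d_c F_ab; vanishes identically
    # because mixed partial derivatives commute.
    return sp.simplify(sp.diff(F[b][c], X[a])
                       + sp.diff(F[c][a], X[b])
                       + sp.diff(F[a][b], X[c]))
```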
In the absence of sources, Maxwell's equations reduce to:
which is an electromagnetic wave equation in the field strength tensor.
The Lorenz gauge condition is a Lorentz-invariant gauge condition. (This can be contrasted with other gauge conditions such as the Coulomb gauge, which if it holds in one inertial frame will generally not hold in any other.) It is expressed in terms of the four-potential as follows:
In the Lorenz gauge, the microscopic Maxwell's equations can be written as:
Electromagnetic (EM) fields affect the motion of electrically charged matter due to the Lorentz force. In this way, EM fields can be detected (with applications in particle physics, and natural occurrences such as in aurorae). In relativistic form, the Lorentz force uses the field strength tensor as follows.
Expressed in terms of coordinate time "t", it is:
where "p""α" is the four-momentum, "q" is the charge, and "x""β" is the position.
Expressed in frame-independent form, we have the four-force
where "u""β" is the four-velocity, and "τ" is the particle's proper time, which is related to coordinate time by .
The density of force due to electromagnetism, whose spatial part is the Lorentz force, is given by
and is related to the electromagnetic stress–energy tensor by
Using the Maxwell equations, one can see that the electromagnetic stress–energy tensor (defined above) satisfies the following differential equation, relating it to the electromagnetic tensor and the current four-vector
which expresses the conservation of linear momentum and energy by electromagnetic interactions.
In order to solve the equations of electromagnetism given here, it is necessary to add information about how to calculate the electric current, "J""ν". Frequently, it is convenient to separate the current into two parts, the free current and the bound current, which are modeled by different equations.
Maxwell's macroscopic equations have been used, in addition to the definitions of the electric displacement D and the magnetic intensity H:
where M is the magnetization and P the electric polarization.
The bound current is derived from the P and M fields which form an antisymmetric contravariant magnetization-polarization tensor
If this is combined with "F"μν we get the antisymmetric contravariant electromagnetic displacement tensor which combines the D and H fields as follows:
The three field tensors are related by:
which is equivalent to the definitions of the D and H fields given above.
The bound current and free current as defined above are automatically and separately conserved
In vacuum, the constitutive relations between the field tensor and displacement tensor are:
Antisymmetry reduces these 16 equations to just six independent equations. Because it is usual to define "F""μν" by
the constitutive equations may, in "vacuum", be combined with the Gauss–Ampère law to get:
The electromagnetic stress–energy tensor in terms of the displacement is:
where "δαπ" is the Kronecker delta. When the upper index is lowered with "η", it becomes symmetric and is part of the source of the gravitational field.
Thus we have reduced the problem of modeling the current, "J""ν", to two (hopefully) easier problems — modeling the free current, "J""ν"free, and modeling the magnetization and polarization, formula_55. For example, in the simplest materials at low frequencies, one has
where one is in the instantaneously comoving inertial frame of the material, "σ" is its electrical conductivity, "χ"e is its electric susceptibility, and "χ"m is its magnetic susceptibility.
The constitutive relations between the formula_59 and "F" tensors, proposed by Minkowski for linear materials (that is, E is proportional to D and B proportional to H), are:
where "u" is the four-velocity of material, "ε" and "μ" are respectively the proper permittivity and permeability of the material (i.e. in rest frame of material), formula_62 and denotes the Hodge dual.
The Lagrangian density for classical electrodynamics is composed of two components: a field component and a source component:
In the interaction term, the four-current should be understood as an abbreviation of many terms expressing the electric currents of other charged fields in terms of their variables; the four-current is not itself a fundamental field.
The Lagrange equations for the electromagnetic Lagrangian density formula_64 can be stated as follows:
the expression inside the square bracket is
Therefore, the electromagnetic field's equations of motion are
Separating the free currents from the bound currents, another way to write the Lagrangian density is as follows:
Using Lagrange equation, the equations of motion for formula_73 can be derived.
The equivalent expression in non-relativistic vector notation is
In continuum mechanics, the generalized Lagrangian mean (GLM) is a formalism – developed by – to unambiguously split a motion into a mean part and an oscillatory part. The method gives a mixed Eulerian–Lagrangian description for the flow field, but appointed to fixed Eulerian coordinates.
In general, it is difficult to decompose a combined wave–mean motion into a mean and a wave part, especially for flows bounded by a wavy surface, e.g. in the presence of surface gravity waves or near another undulating bounding surface (like atmospheric flow over mountainous or hilly terrain). However, this splitting of the motion into a wave part and a mean part is often demanded in mathematical models, when the main interest is in the mean motion – slowly varying at scales much larger than those of the individual undulations. From a series of postulates, one arrives at the GLM formalism, which splits the flow into a generalised Lagrangian mean flow and an oscillatory-flow part.
The GLM method does not suffer from a strong drawback of the Lagrangian specification of the flow field – following individual fluid parcels – namely that Lagrangian positions which are initially close gradually drift far apart. In the Lagrangian frame of reference, it therefore often becomes difficult to attribute Lagrangian-mean values to some location in space.
The specification of mean properties for the oscillatory part of the flow – like Stokes drift, wave action, pseudomomentum and pseudoenergy – and the associated conservation laws arise naturally when using the GLM method.
The GLM concept can also be incorporated into variational principles of fluid flow.
The system size expansion, also known as van Kampen's expansion or the Ω-expansion, is a technique pioneered by Nico van Kampen used in the analysis of stochastic processes. Specifically, it allows one to find an approximation to the solution of a master equation with nonlinear transition rates. The leading order term of the expansion is given by the linear noise approximation, in which the master equation is approximated by a Fokker–Planck equation with linear coefficients determined by the transition rates and stoichiometry of the system.
Less formally, it is normally straightforward to write down a mathematical description of a system where processes happen randomly (for example, radioactive atoms randomly decay in a physical system, or genes that are expressed stochastically in a cell). However, these mathematical descriptions are often too difficult to solve for the study of the system's statistics (for example, the mean and variance of the number of atoms or proteins as a function of time). The system size expansion allows one to obtain an approximate statistical description that can be solved much more easily than the master equation.
Systems that admit a treatment with the system size expansion may be described by a probability distribution formula_1, giving the probability of observing the system in state formula_2 at time formula_3. formula_2 may be, for example, a vector with elements corresponding to the number of molecules of different chemical species in a system. In a system of size formula_5 (intuitively interpreted as the volume), we will adopt the following nomenclature: formula_6 is a vector of macroscopic copy numbers, formula_7 is a vector of concentrations, and formula_8 is a vector of deterministic concentrations, as they would appear according to the rate equation in an infinite system. formula_9 and formula_6 are thus quantities subject to stochastic effects.
A master equation describes the time evolution of this probability. Henceforth, a system of chemical reactions will be discussed to provide a concrete example, although the nomenclature of "species" and "reactions" is generalisable. A system involving formula_11 species and formula_12 reactions can be described with the master equation:
Here, formula_5 is the system size, formula_15 is an operator which will be addressed later, formula_16 is the stoichiometric matrix for the system (in which element formula_16 gives the stoichiometric coefficient for species formula_18 in reaction formula_19), and formula_20 is the rate of reaction formula_19 given a state formula_9 and system size formula_5.
formula_24 is a step operator, removing formula_16 from the formula_18th element of its argument. For example, formula_27. This formalism will be useful later.
The above equation can be interpreted as follows. The initial sum on the RHS is over all reactions. For each reaction formula_19, the brackets immediately following the sum give two terms. The term with the simple coefficient −1 gives the probability flux away from a given state formula_6 due to reaction formula_19 changing the state. The term preceded by the product of step operators gives the probability flux due to reaction formula_19 changing a different state formula_32 into state formula_6. The product of step operators constructs this state formula_32.
For example, consider the (linear) chemical system involving two chemical species formula_35 and formula_36 and the reaction formula_37. In this system, formula_38 (species), formula_39 (reactions). A state of the system is a vector formula_40, where formula_41 are the number of molecules of formula_35 and formula_36 respectively. Let formula_44, so that the rate of reaction 1 (the only reaction) depends on the concentration of formula_35. The stoichiometry matrix is formula_46.
where formula_48 is the shift caused by the action of the product of step operators, required to change state formula_6 to a precursor state formula_50.
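The decay example can be made concrete numerically. The sketch below tracks only the number of molecules of the decaying species (call it n, with a hypothetical per-molecule rate constant k and initial count N0), builds the master-equation right-hand side with the two probability-flux terms described above (the outflux term with coefficient −1 and the step-operator influx from the precursor state n + 1), and checks that the mean decays exponentially:

```python
import numpy as np

k, N0 = 1.0, 30                    # hypothetical rate constant and initial count
P = np.zeros(N0 + 1)
P[N0] = 1.0                        # start with exactly N0 molecules
n = np.arange(N0 + 1)

def rhs(P):
    # Master equation dP(n)/dt = k[(n+1) P(n+1) - n P(n)]:
    dP = -k * n * P                # flux out of each state n
    dP[:-1] += k * n[1:] * P[1:]   # flux into n from the precursor state n+1
    return dP

dt, T = 1e-3, 1.0
for _ in range(int(T / dt)):       # simple Euler integration of the ODE system
    P = P + dt * rhs(P)

mean = n @ P                       # <n>(T); should be close to N0 * exp(-k*T)
```

Because the transition rates are linear in n here, the mean obeys the deterministic rate equation exactly; this is precisely the situation where the system size expansion becomes necessary only once nonlinear rates appear.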
If the master equation possesses nonlinear transition rates, it may be impossible to solve it analytically. The system size expansion utilises the ansatz that the variance of the steady-state probability distribution of constituent numbers in a population scales like the system size. This ansatz is used to expand the master equation in terms of a small parameter given by the inverse system size.
Specifically, let us write formula_51, the copy number of component formula_18, as a sum of its "deterministic" value (a scaled-up concentration) and a random variable formula_53, scaled by formula_54:
The probability distribution of formula_6 can then be rewritten in the vector of random variables formula_53:
Consider how to write reaction rates formula_59 and the step operator formula_15 in terms of this new random variable. Taylor expansion of the transition rates gives:
The step operator has the effect formula_62 and hence formula_63:
We are now in a position to recast the master equation.
This rather frightening expression makes a bit more sense when we gather terms in different powers of formula_5. First, terms of order formula_54 give
These terms cancel, due to the macroscopic reaction equation
The terms of order formula_70 are more interesting:
The time evolution of formula_75 is then governed by the linear Fokker–Planck equation with coefficient matrices formula_76 and formula_77 (in the large-formula_5 limit, terms of formula_79 may be neglected, termed the linear noise approximation). With knowledge of the reaction rates formula_80 and stoichiometry formula_81, the moments of formula_75 can then be calculated.
The approximation implies that fluctuations around the mean are Gaussian distributed. Non-Gaussian features of the distributions can be computed by taking into account higher order terms in the expansion.
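The linear noise approximation can be checked against a directly solvable case. The sketch below uses a hypothetical one-species birth–death system (production at constant rate k0·Ω, degradation at rate k1·n; not the decay example above): the LNA predicts a steady-state variance Ω·k0/k1, equal to the mean, and this is compared with the numerically integrated master equation:

```python
import numpy as np

k0, k1, Omega = 2.0, 1.0, 50       # hypothetical rates and system size
nmax = 400                          # truncation of the state space
n = np.arange(nmax + 1)
P = np.zeros(nmax + 1)
P[0] = 1.0

def rhs(P):
    # Master equation for birth (rate k0*Omega) and death (rate k1*n):
    dP = -(k0 * Omega + k1 * n) * P  # flux out of each state
    dP[1:] += k0 * Omega * P[:-1]    # birth: into n from n-1
    dP[:-1] += k1 * n[1:] * P[1:]    # death: into n from n+1
    return dP

dt = 5e-4
for _ in range(40000):               # integrate to (near) steady state
    P = P + dt * rhs(P)

mean = n @ P
var = (n - mean) ** 2 @ P
lna_var = Omega * k0 / k1            # LNA prediction: variance = mean
```

For this linear system the LNA is in fact exact (the stationary distribution is Poisson); for nonlinear rates it would hold only to leading order in 1/Ω.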
The wave equation is an important second-order linear partial differential equation for the description of waves—as they occur in classical physics—such as mechanical waves (e.g. water waves, sound waves and seismic waves) or light waves. It arises in fields like acoustics, electromagnetics, and fluid dynamics.
Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation.
The wave equation is a partial differential equation that may constrain some scalar function of a time variable and one or more spatial variables . The quantity may be, for example, the pressure in a liquid or gas, or the displacement, along some specific direction, of the particles of a vibrating solid away from their resting positions. The equation is
where is a fixed non-negative real coefficient.
Using the notations of Newtonian mechanics and vector calculus, the wave equation can be written more compactly as
where the double dot denotes double time derivative of , is the nabla operator, and is the (spatial) Laplacian operator:
A solution of this equation can be quite complicated, but it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed . This analysis is possible because the wave equation is linear; so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution. This property is called the superposition principle in physics.
The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments.
The wave equation is the simplest example of a hyperbolic differential equation. It, and its modifications, play fundamental roles in continuum mechanics, quantum mechanics, plasma physics, general relativity, geophysics, and many other scientific and technical disciplines.
The wave equation in one space dimension can be written as follows:
This equation is typically described as having only one space dimension , because the only other independent variable is the time . Nevertheless, the dependent variable may represent a second space dimension, if, for example, the displacement takes place in -direction, as in the case of a string that is located in the .
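A short numerical sketch illustrates the one-dimensional equation: a standard second-order finite-difference (leapfrog) scheme on a periodic domain, with illustrative parameters, transports a rightward-travelling profile u(x, t) = f(x − ct) essentially unchanged, as the d'Alembert solution predicts:

```python
import numpy as np

c, L, N = 1.0, 1.0, 200
dx = L / N
dt = 0.5 * dx / c                    # CFL-stable time step
x = np.arange(N) * dx

# A smooth periodic bump, used both as initial data and as exact solution:
f = lambda s: np.exp(-100 * (np.mod(s, L) - 0.5) ** 2)

u_prev = f(x + c * dt)               # u at t = -dt for a right-moving wave
u = f(x)                             # u at t = 0
steps = int(round(0.25 / (c * dt)))  # advance the wave a quarter of the domain
for _ in range(steps):
    # Centered second differences in space and time: u_tt = c^2 u_xx
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u_next = 2 * u - u_prev + (c * dt) ** 2 * lap
    u_prev, u = u, u_next

exact = f(x - c * dt * steps)        # d'Alembert travelling-wave solution
```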