where formula_2 is the latitudinal component of the particle's angular momentum, formula_3 is the energy of the particle, formula_4 is the particle's axial angular momentum, formula_5 is the rest mass of the particle, and formula_6 is the spin parameter of the black hole. Because functions of conserved quantities are also conserved, any function of formula_7 and the three other constants of the motion can be used as a fourth constant in place of formula_7. This has resulted in some confusion as to the form of Carter's constant. For example, it is sometimes more convenient to use:
in place of formula_7. The quantity formula_11 is useful because it is always non-negative. In general any fourth conserved quantity for motion in the Kerr family of spacetimes may be referred to as "Carter's constant".
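The stripped formulas cannot be recovered from this text alone, but in the usual Boyer–Lindquist notation (an assumed reconstruction, with formula_3 written as E, formula_4 as L_z, formula_5 as m, and formula_6 as a) Carter's constant and the non-negative alternative read:

```latex
C = p_\theta^{2} + \cos^{2}\theta\left(a^{2}\left(m^{2} - E^{2}\right) + \frac{L_z^{2}}{\sin^{2}\theta}\right),
\qquad
K = C + \left(L_z - aE\right)^{2} \ge 0 .
```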
Noether's theorem states that all conserved quantities are related to symmetries. Carter's constant is related to a higher-order symmetry of the Kerr metric generated by a second-order Killing tensor field formula_11 (a different formula_11 from the one used above). In component form:
where formula_15 is the four-velocity of the particle in motion. The components of the Killing tensor in Boyer–Lindquist coordinates are:
where formula_17 are the components of the metric tensor and formula_18 and formula_19 are the components of the principal null vectors:
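The components themselves were lost in extraction; in the usual presentation (an assumed reconstruction, up to sign and index conventions) the Killing tensor and the principal null vectors in Boyer–Lindquist coordinates read:

```latex
K^{\mu\nu} = 2\,\Sigma\, l^{(\mu} n^{\nu)} + r^{2} g^{\mu\nu},
\qquad \Sigma = r^{2} + a^{2}\cos^{2}\theta,
```
```latex
l^{\mu} = \left(\frac{r^{2}+a^{2}}{\Delta},\, 1,\, 0,\, \frac{a}{\Delta}\right),
\qquad
n^{\mu} = \frac{1}{2\Sigma}\left(r^{2}+a^{2},\, -\Delta,\, 0,\, a\right),
```

with Carter's constant then given by the contraction of the Killing tensor with the four-velocity.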
The spherical symmetry of the Schwarzschild metric for non-spinning black holes allows one to reduce the problem of finding the trajectories of particles to three dimensions. In this case one only needs formula_3, formula_4, and formula_5 to determine the motion; however, the symmetry leading to Carter's constant still exists. Carter's constant for Schwarzschild space is:
By a rotation of coordinates we can put any orbit into the formula_27 plane, so that formula_28. In this case formula_29, the square of the orbital angular momentum.
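In the usual notation (again an assumed reconstruction), the Schwarzschild form and its equatorial reduction are:

```latex
C = p_\theta^{2} + \left(\frac{L_z}{\sin\theta}\right)^{2},
\qquad
\theta = \tfrac{\pi}{2},\ p_\theta = 0 \;\Longrightarrow\; C = L_z^{2} = L^{2}.
```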
The Bohr–Kramers–Slater theory (BKS theory) was perhaps the final attempt at understanding the interaction of matter and electromagnetic radiation on the basis of the so-called old quantum theory, in which quantum phenomena are treated by imposing quantum restrictions on classically describable behaviour. It was advanced in 1924, and retained a "classical" wave description of the electromagnetic field. It was perhaps more a research programme than a complete physical theory, as its ideas were never worked out in a quantitative way.
One aspect of the theory, the idea of modelling atomic behaviour under incident electromagnetic radiation using "virtual oscillators" at the absorption and emission frequencies, rather than at the (different) apparent frequencies of the Bohr orbits, led Born, Heisenberg and Kramers to explore mathematics that strongly inspired the subsequent development of matrix mechanics, the first form of modern quantum mechanics. The provocativeness of the theory also generated great discussion and renewed attention to the difficulties in the foundations of the old quantum theory. However, the most physically provocative element of the theory, that momentum and energy would be conserved not in each individual interaction but only overall and statistically, was soon shown to conflict with experiment.
The initial idea of the BKS theory originated with Slater, who proposed to Bohr and Kramers the following elements of a theory of emission and absorption of radiation by atoms, to be developed during his stay in Copenhagen:
Slater's main intention seems to have been to reconcile the two conflicting models of radiation, viz. the wave and particle models. He may have had good hopes that his idea with respect to oscillators vibrating at the "differences" of the frequencies of electron rotations (rather than at the rotation frequencies themselves) might be attractive to Bohr because it solved a problem of the latter's atomic model, even though the physical meaning of these oscillators was far from clear. Nevertheless, Bohr and Kramers had two objections to Slater's proposal:
As Max Jammer puts it, this refocussed the theory "to harmonize the physical picture of the continuous electromagnetic field with the physical picture, not as Slater had proposed of light quanta, but of the discontinuous quantum transitions in the atom." Bohr and Kramers hoped to be able to evade the photon hypothesis on the basis of ongoing work by Kramers to describe "dispersion" (in present-day terms, inelastic scattering) of light by means of a classical theory of the interaction of radiation and matter. But in abandoning the concept of the photon, they chose instead to accept squarely the possibility of non-conservation of energy and momentum.
In particle physics, flavour or flavor refers to the "species" of an elementary particle. The Standard Model counts six flavours of quarks and six flavours of leptons. They are conventionally parameterized with "flavour quantum numbers" that are assigned to all subatomic particles. They can also be described by some of the family symmetries proposed for the quark-lepton generations.
In classical mechanics, a force acting on a point-like particle can only alter the particle's dynamical state, i.e., its momentum, angular momentum, etc. Quantum field theory, however, allows interactions that can alter other facets of a particle's nature described by non-dynamical, discrete quantum numbers. In particular, the action of the weak force is such that it allows the conversion of quantum numbers describing mass and electric charge of both quarks and leptons from one discrete type to another. This is known as a flavour change, or flavour transmutation. Due to their quantum description, flavour states may also undergo quantum superposition.
In atomic physics the principal quantum number of an electron specifies the electron shell in which it resides, which determines the energy level of the whole atom. Analogously, the five flavour quantum numbers (isospin, strangeness, charm, bottomness, and topness) can characterize the quantum state of quarks, by the degree to which they exhibit the six distinct flavours (u, d, s, c, b, t).
Composite particles can be created from multiple quarks, forming hadrons, such as mesons and baryons, each possessing unique aggregate characteristics, such as different masses, electric charges, and decay modes. A hadron's overall flavour quantum numbers depend on the numbers of constituent quarks of each particular flavour.
All of the various charges discussed above are conserved by the fact that the corresponding charge operators can be understood as "generators of symmetries" that commute with the Hamiltonian. Thus, the eigenvalues of the various charge operators are conserved.
Absolutely conserved flavour quantum numbers in the Standard Model are:
In some theories, such as grand unified theories, the individual baryon and lepton number conservation can be violated, provided the difference between them (B − L) is conserved (see chiral anomaly).
Strong interactions conserve all flavours, but all flavour quantum numbers (other than baryon number and electric charge) are violated (changed, not conserved) by electroweak interactions.
If there are two or more particles which have identical interactions, then they may be interchanged without affecting the physics. Any (complex) linear combination of these two particles gives the same physics, as long as the combinations are orthogonal (perpendicular) to each other.
In other words, the theory possesses symmetry transformations such as formula_1, where the two fields represent the various "generations" of leptons and quarks (see below), and the transformation matrix is any unitary matrix with unit determinant. Such matrices form a Lie group called SU(2) (see special unitary group). This is an example of flavour symmetry.
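As a numerical illustration of this symmetry (a sketch, with hypothetical field amplitudes named u and d), any SU(2) matrix has unit determinant and preserves the combined norm of the doublet it acts on, so the transformed combination describes the same physics:

```python
import cmath

# A general SU(2) matrix can be written [[a, b], [-b*, a*]] with |a|^2 + |b|^2 = 1.
# The phases and moduli below are an arbitrary choice satisfying that constraint.
a = cmath.exp(1j * 0.3) * 0.6
b = cmath.exp(-1j * 1.1) * 0.8   # |a|^2 + |b|^2 = 0.36 + 0.64 = 1
U = [[a, b], [-b.conjugate(), a.conjugate()]]

det = U[0][0] * U[1][1] - U[0][1] * U[1][0]

# u and d are hypothetical stand-ins for the two interchangeable fields.
u, d = 1.0 + 2.0j, -0.5 + 0.25j
u2 = U[0][0] * u + U[0][1] * d
d2 = U[1][0] * u + U[1][1] * d

# Unit determinant, and the norm of the doublet is preserved by unitarity.
print(abs(det - 1) < 1e-12)
print(abs((abs(u2) ** 2 + abs(d2) ** 2) - (abs(u) ** 2 + abs(d) ** 2)) < 1e-12)
```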
In quantum chromodynamics, flavour is a conserved global symmetry. In the electroweak theory, on the other hand, this symmetry is broken, and flavour changing processes exist, such as quark decay or neutrino oscillations.
All leptons carry a lepton number of 1. In addition, leptons carry weak isospin, which is −1/2 for the three charged leptons (i.e. electron, muon and tau) and +1/2 for the three associated neutrinos. Each doublet of a charged lepton and a neutrino with opposite weak isospin is said to constitute one generation of leptons. In addition, one defines a quantum number called weak hypercharge, which is −1 for all left-handed leptons. Weak isospin and weak hypercharge are gauged in the Standard Model.
Leptons may be assigned the six flavour quantum numbers: electron number, muon number, tau number, and corresponding numbers for the neutrinos. These are conserved in strong and electromagnetic interactions, but violated by weak interactions. Therefore, such flavour quantum numbers are not of great use. A separate quantum number for each generation is more useful: electronic lepton number (+1 for electrons and electron neutrinos), muonic lepton number (+1 for muons and muon neutrinos), and tauonic lepton number (+1 for tau leptons and tau neutrinos). However, even these numbers are not absolutely conserved, as neutrinos of different generations can mix; that is, a neutrino of one flavour can transform into another flavour. The strength of such mixings is specified by a matrix called the Pontecorvo–Maki–Nakagawa–Sakata matrix (PMNS matrix).
All quarks carry a baryon number of +1/3, and all antiquarks a baryon number of −1/3. They also all carry weak isospin, ±1/2. The positively charged quarks (up, charm, and top quarks) are called "up-type quarks" and have weak isospin +1/2; the negatively charged quarks (down, strange, and bottom quarks) are called "down-type quarks" and have weak isospin −1/2. Each doublet of up- and down-type quarks constitutes one generation of quarks.
For all the quark flavour quantum numbers listed below, the convention is that the flavour charge and the electric charge of a quark have the same sign. Thus any flavour carried by a charged meson has the same sign as its charge. Quarks have the following flavour quantum numbers:
These five quantum numbers, together with baryon number (which is not a flavour quantum number), completely specify the numbers of all six quark flavours separately (with an antiquark counted with a minus sign). They are conserved by both the electromagnetic and strong interactions (but not the weak interaction). From them can be built the derived quantum numbers:
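The list itself is not reproduced here; two standard examples of derived quantities (a sketch in the usual notation) are the hypercharge and the Gell-Mann–Nishijima relation for the electric charge:

```latex
Y = B + S + C + B' + T, \qquad Q = I_3 + \frac{Y}{2}.
```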
The terms "strange" and "strangeness" predate the discovery of the quark, but continued to be used after its discovery for the sake of continuity (i.e. the strangeness of each type of hadron remained the same); the strangeness of antiparticles is +1, and that of particles −1, as per the original definition. Strangeness was introduced to explain the rate of decay of newly discovered particles, such as the kaon, and was used in the Eightfold Way classification of hadrons and in subsequent quark models. These quantum numbers are preserved under strong and electromagnetic interactions, but not under weak interactions.
For first-order weak decays, that is, processes involving only one quark decay, these quantum numbers (e.g. charm) can only vary by one unit: in a decay involving a charmed quark or antiquark, either as the incident particle or as a decay byproduct, the charm changes by 1; likewise, in a decay involving a bottom quark or antiquark, the bottomness changes by 1. Since first-order processes are more common than second-order processes (involving two quark decays), this can be used as an approximate "selection rule" for weak decays.
A special mixture of quark flavours is an eigenstate of the weak-interaction part of the Hamiltonian, and so interacts in a particularly simple way with the W bosons (charged weak interactions violate flavour). On the other hand, a fermion of a fixed mass (an eigenstate of the kinetic and strong-interaction parts of the Hamiltonian) is an eigenstate of flavour. The transformation from the former basis to the flavour-eigenstate/mass-eigenstate basis for quarks underlies the Cabibbo–Kobayashi–Maskawa matrix (CKM matrix). This matrix is analogous to the PMNS matrix for neutrinos, and quantifies flavour changes under charged weak interactions of quarks.
The CKM matrix allows for CP violation if there are at least three generations.
Flavour quantum numbers are additive. Hence antiparticles have flavour equal in magnitude to the particle but opposite in sign. Hadrons inherit their flavour quantum number from their valence quarks: this is the basis of the classification in the quark model. The relations between the hypercharge, electric charge and other flavour quantum numbers hold for hadrons as well as quarks.
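The additivity described above can be sketched in a few lines of code: sum the quark flavour quantum numbers over the valence quarks (antiquarks with opposite sign) and recover the hadron's charge via the Gell-Mann–Nishijima relation. The table of values is standard, but the function and naming scheme here are illustrative only.

```python
from fractions import Fraction as F

# Flavour quantum numbers of the quarks (antiquarks get the opposite sign).
# B = baryon number, I3 = isospin projection, S, C, Bp, T = strangeness,
# charm, bottomness, topness.
QUARKS = {
    "u": dict(B=F(1, 3), I3=F(1, 2), S=0, C=0, Bp=0, T=0),
    "d": dict(B=F(1, 3), I3=F(-1, 2), S=0, C=0, Bp=0, T=0),
    "s": dict(B=F(1, 3), I3=0, S=-1, C=0, Bp=0, T=0),
    "c": dict(B=F(1, 3), I3=0, S=0, C=1, Bp=0, T=0),
    "b": dict(B=F(1, 3), I3=0, S=0, C=0, Bp=-1, T=0),
    "t": dict(B=F(1, 3), I3=0, S=0, C=0, Bp=0, T=1),
}

def hadron_numbers(quarks):
    """Sum flavour quantum numbers over valence quarks; a leading '~' marks an antiquark."""
    total = dict(B=F(0), I3=F(0), S=F(0), C=F(0), Bp=F(0), T=F(0))
    for q in quarks:
        sign, name = (-1, q[1:]) if q.startswith("~") else (1, q)
        for key, val in QUARKS[name].items():
            total[key] += sign * F(val)
    # Gell-Mann-Nishijima relation: Q = I3 + (B + S + C + B' + T) / 2
    total["Q"] = total["I3"] + (total["B"] + total["S"] + total["C"]
                                + total["Bp"] + total["T"]) / 2
    return total

kaon = hadron_numbers(["u", "~s"])       # K+ meson (u, anti-s)
proton = hadron_numbers(["u", "u", "d"])
print(kaon["S"], kaon["Q"])      # strangeness +1, charge +1
print(proton["B"], proton["Q"])  # baryon number 1, charge +1
```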
Quantum chromodynamics (QCD) contains six flavours of quarks. However, their masses differ and as a result they are not strictly interchangeable with each other. The up and down flavours are close to having equal masses, and the theory of these two quarks possesses an approximate SU(2) symmetry (isospin symmetry).
Under some circumstances (for instance when the quark masses are much smaller than the chiral symmetry breaking scale of 250 MeV), the masses of quarks do not meaningfully contribute to the system's behavior, and can be ignored to zeroth approximation. The simplified behavior of flavour transformations can then be successfully modeled as acting independently on the left- and right-handed parts of each quark field. This approximate description of the flavour symmetry is described by a chiral group SU(N)L × SU(N)R.
If all quarks had non-zero but equal masses, then this chiral symmetry would be broken down to the "vector symmetry" of the "diagonal flavour group" SU(N)V, which applies the same transformation to both helicities of the quarks. This reduction of symmetry is a form of "explicit symmetry breaking". The strength of explicit symmetry breaking is controlled by the current quark masses in QCD.
Even if quarks are massless, chiral flavour symmetry can be spontaneously broken if the vacuum of the theory contains a chiral condensate (as it does in low-energy QCD). This gives rise to an effective mass for the quarks, often identified with the valence quark mass in QCD.
Analysis of experiments indicates that the current quark masses of the lighter flavours of quarks are much smaller than the QCD scale, ΛQCD, hence chiral flavour symmetry is a good approximation to QCD for the up, down and strange quarks. The success of chiral perturbation theory and of the even more naive chiral models springs from this fact. The valence quark masses extracted from the quark model are much larger than the current quark masses. This indicates that QCD has spontaneous chiral symmetry breaking with the formation of a chiral condensate. Other phases of QCD may break the chiral flavour symmetries in other ways.
Some of the historical events that led to the development of flavour symmetry are discussed in the articles on isospin, the eightfold way, and chiral symmetry. Chief among these was the November Revolution of 1974, in which the fourth (charm) quark was discovered.
Reciprocity in electrical networks is a property of a circuit that relates voltages and currents at two points. The reciprocity theorem states that the current at one point in a circuit due to a voltage at a second point is the same as the current at the second point due to the same voltage at the first. The reciprocity theorem is valid for almost all passive networks. The reciprocity theorem is a feature of a more general principle of reciprocity in electromagnetism.
If a current, formula_1, injected into port A produces a voltage, formula_2, at port B and formula_1 injected into port B produces formula_2 at port A, then the network is said to be reciprocal. Equivalently, reciprocity can be defined by the dual situation; applying voltage, formula_5, at port A producing current formula_6 at port B and formula_5 at port B producing current formula_6 at port A. In general, passive networks are reciprocal. Any network that consists entirely of ideal capacitances, inductances (including mutual inductances), and resistances, that is, elements that are linear and bilateral, will be reciprocal. However, passive components that are non-reciprocal do exist. Any component containing ferromagnetic material is likely to be non-reciprocal. Examples of passive components deliberately designed to be non-reciprocal include circulators and isolators.
A reciprocal network has a representative matrix that is symmetrical about the main diagonal when expressed in terms of z-parameters, y-parameters, or s-parameters. A non-symmetrical matrix implies a non-reciprocal network, although a symmetric matrix does not imply a physically symmetric network.
In some parametrisations of networks, the representative matrix is not symmetrical for reciprocal networks. Common examples are h-parameters and ABCD-parameters, but they all have some other condition for reciprocity that can be calculated from the parameters. For h-parameters the condition is formula_9 and for the ABCD parameters it is formula_10. These representations mix voltages and currents in the same column vector and therefore do not even have matching units in transposed elements.
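The point can be checked numerically. This sketch builds the z-parameters of a hypothetical asymmetrical resistive T-network (values chosen arbitrarily), confirms the z-matrix is symmetric, converts to ABCD parameters, and verifies the standard ABCD reciprocity condition AD − BC = 1 (assumed here as the condition the text denotes formula_10):

```python
# A reciprocal two-port: asymmetrical resistive T-network with hypothetical
# values R1 = 3, R2 = 4, R3 = 8 ohms (series - shunt - series).
R1, R2, R3 = 3.0, 4.0, 8.0

# z-parameters of a T-network: the off-diagonal terms are equal (z12 == z21).
z11, z12 = R1 + R2, R2
z21, z22 = R2, R3 + R2

# ABCD parameters expressed through the z-parameters.
A = z11 / z21
B = (z11 * z22 - z12 * z21) / z21
C = 1.0 / z21
D = z22 / z21

# The ABCD matrix itself is not symmetric for a reciprocal network;
# instead the reciprocity condition is AD - BC = 1.
print(abs(z12 - z21) < 1e-12)          # z-matrix symmetric
print(abs(A * D - B * C - 1) < 1e-12)  # ABCD reciprocity condition
```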
An example of reciprocity can be demonstrated using an asymmetrical resistive attenuator. An asymmetrical network is chosen as the example because a symmetrical network is fairly self-evidently reciprocal.
Injecting six amps into port 1 of this network produces 24 volts at port 2.
Injecting six amps into port 2 produces 24 volts at port 1.
Hence, the network is reciprocal. In this example, the port that is not injecting current is left open circuit. This is because a current generator applying zero current is an open circuit. If, on the other hand, one wished to apply voltages and measure the resulting current, then the port to which the voltage is not applied would be made short circuit. This is because a voltage generator applying zero volts is a short circuit.
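The attenuator's figure is not reproduced here, but the measurement can be sketched with assumed component values. This example uses a hypothetical asymmetrical T-network whose transfer impedance of 4 ohms reproduces the 6 A and 24 V figures quoted above, with the non-driven port left open-circuit as described:

```python
# Hypothetical asymmetrical T-network: R1 = 2, R2 = 4, R3 = 7 ohms.
# Its z-matrix has z21 = z12 = R2 = 4 ohms, so 6 A in gives 24 V out.
R1, R2, R3 = 2.0, 4.0, 7.0
Z = [[R1 + R2, R2],
     [R2, R3 + R2]]

I = 6.0
# Port 2 open-circuited (I2 = 0): V2 = z21 * I1.
V2 = Z[1][0] * I
# Port 1 open-circuited (I1 = 0): V1 = z12 * I2.
V1 = Z[0][1] * I
print(V2, V1)  # both 24.0 volts, demonstrating reciprocity
```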
Reciprocity of electrical networks is a special case of Lorentz reciprocity, but it can also be proven more directly from network theorems. This proof shows reciprocity for a two-node network in terms of its admittance matrix, and then shows reciprocity for a network with an arbitrary number of nodes by an induction argument. A linear network can be represented as a set of linear equations through nodal analysis. These equations can be expressed in the form of an admittance matrix,
If we further require that the network is made up of passive, bilateral elements, then
since the admittance connected between nodes "j" and "k" is the same element as the admittance connected between nodes "k" and "j". The matrix is therefore symmetrical. For the case where formula_17 the matrix reduces to,
From which it can be seen that,
which is synonymous with the condition for reciprocity. In words, the ratio of the current at one port to the voltage at another is the same ratio if the ports being driven and measured are interchanged. Thus reciprocity is proven for the case of formula_17.
For the case of a matrix of arbitrary size, the order of the matrix can be reduced through node elimination. After eliminating the "s"th node, the new admittance matrix will have the form,
It can be seen that this new matrix is also symmetrical. Nodes can continue to be eliminated in this way until only a 2×2 symmetrical matrix remains involving the two nodes of interest. Since this matrix is symmetrical, this proves that reciprocity applies to a network of arbitrary size when a voltage is applied at one node and the current is measured at another. A similar process using the impedance matrix from mesh analysis demonstrates reciprocity when one node is driven by a current and the voltage is measured at another.
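The node-elimination step can be sketched numerically. This is a minimal Kron reduction on a small symmetric admittance matrix with hypothetical values; eliminating a node via the formula Y'jk = Yjk − Yjs·Ysk/Yss leaves the reduced matrix symmetric, which is the key fact in the induction argument:

```python
# Symmetric nodal admittance matrix (siemens) for a hypothetical 3-node network.
Y = [[ 3.0, -1.0, -2.0],
     [-1.0,  4.0, -3.0],
     [-2.0, -3.0,  5.0]]

def eliminate(Y, s):
    """Remove node s by Kron reduction: Y'[j][k] = Y[j][k] - Y[j][s]*Y[s][k]/Y[s][s]."""
    keep = [i for i in range(len(Y)) if i != s]
    return [[Y[j][k] - Y[j][s] * Y[s][k] / Y[s][s] for k in keep] for j in keep]

Y2 = eliminate(Y, 2)  # eliminate the third node
# The reduced matrix is still symmetric, so reciprocity survives the reduction.
print(abs(Y2[0][1] - Y2[1][0]) < 1e-12)
```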
The Extra Element Theorem (EET) is an analytic technique developed by R. D. Middlebrook for simplifying the process of deriving driving point and transfer functions for linear electronic circuits. Much like Thévenin's theorem, the extra element theorem breaks down one complicated problem into several simpler ones.
Driving point and transfer functions can generally be found using Kirchhoff's circuit laws. However, several complicated equations may result that offer little insight into the circuit's behavior. Using the extra element theorem, a circuit element (such as a resistor) can be removed from a circuit and the desired driving point or transfer function found. By removing the element that most complicates the circuit (such as an element that creates feedback), the desired function can be easier to obtain. Next, two correction factors must be found and combined with the previously derived function to obtain the exact expression.
The general form of the extra element theorem is called the N-extra element theorem and allows multiple circuit elements to be removed at once.
The (single) extra element theorem expresses any transfer function as a product of the transfer function with that element removed and a correction factor. The correction factor term consists of the impedance of the extra element and two driving point impedances seen by the extra element: the double-null-injection driving point impedance and the single-injection driving point impedance. Because an extra element can in general be removed either by short-circuiting or by open-circuiting it, there are two equivalent forms of the EET:
where the Laplace-domain transfer functions and impedances in the above expressions are defined as follows: the transfer function with the extra element present; the transfer function with the extra element open-circuited; the transfer function with the extra element short-circuited; the impedance of the extra element; the single-injection driving point impedance "seen" by the extra element; and the double-null-injection driving point impedance "seen" by the extra element.
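In Middlebrook's usual notation (assumed here, since the symbols above were lost in extraction), with H the transfer function with the element's impedance Z present, H∞ and H0 the transfer functions with the element open- and short-circuited, Zd the single-injection and Zn the double-null-injection driving point impedances, the two equivalent forms read:

```latex
H(s) = H_\infty(s)\,
\frac{1 + \dfrac{Z_n(s)}{Z(s)}}{1 + \dfrac{Z_d(s)}{Z(s)}}
= H_0(s)\,
\frac{1 + \dfrac{Z(s)}{Z_n(s)}}{1 + \dfrac{Z(s)}{Z_d(s)}} .
```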
The extra element theorem incidentally proves that any electric circuit transfer function can be expressed as no more than a bilinear function of any particular circuit element.
It is found by making the input to the system's transfer function zero (short-circuiting a voltage source or open-circuiting a current source) and determining the impedance across the terminals to which the extra element will be connected, with the extra element absent. This impedance is the same as the Thévenin equivalent impedance.
It is found by replacing the extra element with a second test signal source (either a current source or a voltage source, as appropriate). It is then defined as the ratio of the voltage across the terminals of this second test source to the current leaving its positive terminal, when the output of the system's transfer function is nulled for any value of the primary input to the system's transfer function.
In practice, it can be found by working backwards from the facts that the output of the transfer function is nulled and that the primary input to the transfer function is unknown. Conventional circuit analysis techniques are then used to express both the voltage across the extra element test source's terminals and the current leaving its positive terminal, and the ratio formula_3 is calculated. Although this computation is an unfamiliar process for many engineers, the resulting expressions are often much simpler than those for the single-injection impedance, because nulling the transfer function's output often forces other voltages and currents in the circuit to zero, which may allow certain components to be excluded from the analysis.
Special case with transfer function as a self-impedance.
As a special case, the EET can be used to find the input impedance of a network with the addition of an element designated as "extra". In this case, the single-injection driving point impedance is found with the input test current source set to zero, or equivalently with the input open-circuited. Likewise, since the transfer function's output signal can be considered to be the voltage at the input terminals, the double-null-injection impedance is found when the input voltage is zero, i.e. with the input terminals short-circuited. Thus, for this particular application, the EET can be written as:
Computing these three terms may seem like extra effort, but they are often easier to compute than the overall input impedance.
Consider the problem of finding formula_9 for the circuit in Figure 1 using the EET (note all component values are unity for simplicity). If the capacitor (gray shading) is denoted the extra element then
Calculating the impedance seen by the capacitor with the input shorted,
Calculating the impedance seen by the capacitor with the input open,
This problem was solved by calculating three simple driving point impedances by inspection.
The EET is also useful for analyzing single and multi-loop feedback amplifiers. In this case the EET can take the form of the asymptotic gain model.
The star-mesh transform, or star-polygon transform, is a mathematical circuit analysis technique to transform a resistive network into an equivalent network with one less node. The equivalence follows from the Schur complement identity applied to the Kirchhoff matrix of the network.
The equivalent impedance between nodes A and B is given by:
where formula_2 is the impedance between node A and the central node being removed.
The transform replaces "N" resistors with formula_3 resistors. For formula_4, the result is an increase in the number of resistors, so the transform has no general inverse without additional constraints.
It is possible, though not necessarily efficient, to transform an arbitrarily complex two-terminal resistive network into a single equivalent resistor by repeatedly applying the star-mesh transform to eliminate each non-terminal node.
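For N = 3 the star-mesh transform reduces to the familiar star-delta (Y-Δ) transform, which can be checked numerically. This sketch uses hypothetical star resistances, builds the mesh resistors from the rule Rjk = Rj·Rk·Σ(1/Ri), and verifies that the two networks present the same resistance between terminals A and B with terminal C left floating:

```python
# Star-mesh transform for N = 3, with hypothetical star resistances
# Ra = 1, Rb = 2, Rc = 3 ohms from the terminals to the eliminated centre node.
Ra, Rb, Rc = 1.0, 2.0, 3.0

# Mesh resistor between each terminal pair j, k: Rjk = Rj * Rk * sum(1/Ri).
s = 1 / Ra + 1 / Rb + 1 / Rc
Rab, Rac, Rbc = Ra * Rb * s, Ra * Rc * s, Rb * Rc * s

# Resistance between A and B with C floating:
#   star: Ra + Rb (the centre node simply joins them in series)
#   mesh: Rab in parallel with (Rac + Rbc)
star_RAB = Ra + Rb
series = Rac + Rbc
mesh_RAB = Rab * series / (Rab + series)
print(abs(star_RAB - mesh_RAB) < 1e-9)  # the two networks are equivalent
```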
Source transformation is the process of simplifying a circuit solution, especially with mixed sources, by transforming voltage sources into current sources, and vice versa, using Thévenin's theorem and Norton's theorem respectively.
Performing a source transformation consists of using Ohm's law to take an existing voltage source in series with a resistance, and replacing it with a current source in parallel with the same resistance, or vice versa. The transformed sources are considered identical and can be substituted for one another in a circuit.
Source transformations are not limited to resistive circuits. They can be performed on a circuit involving capacitors and inductors as well, by expressing circuit elements as impedances and sources in the frequency domain. In general, the concept of source transformation is an application of Thévenin's theorem to a current source, or Norton's theorem to a voltage source. However, this means that source transformation is bound by the same conditions as Thevenin's theorem and Norton's theorem; namely that the load behaves linearly, and does not contain dependent voltage or current sources.
Source transformations are easy to compute using Ohm's law. If there is a voltage source in series with an impedance, it is possible to find the value of the equivalent current source in parallel with the impedance by dividing the value of the voltage source by the value of the impedance. The converse also holds: if a current source in parallel with an impedance is present, multiplying the value of the current source with the value of the impedance provides the equivalent voltage source in series with the impedance. A visual example of a source transformation can be seen in Figure 1.
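The equivalence can be sketched with hypothetical values: a 10 V source in series with 2 Ω transforms into a 10/2 = 5 A source in parallel with 2 Ω, and both present identical terminal behaviour to any linear load:

```python
# Hypothetical source: 10 V in series with 2 ohms.
Vs, R = 10.0, 2.0
Is = Vs / R  # equivalent current source value (Ohm's law): 5 A

for RL in (0.5, 1.0, 4.0, 100.0):
    # Thevenin form: load voltage from a simple voltage divider.
    v_thevenin = Vs * RL / (R + RL)
    # Norton form: source current into R parallel with RL.
    v_norton = Is * (R * RL / (R + RL))
    # The two forms give the same load voltage for every linear load.
    assert abs(v_thevenin - v_norton) < 1e-12
print("load voltages agree for all tested loads")
```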
The transformation can be derived from the uniqueness theorem. In the present context, it implies that a black box with two terminals must have a unique, well-defined relation between its voltage and current. It is readily verified that the above transformation indeed gives the same V-I curve, and therefore the transformation is valid.
In the mathematical theory of bifurcations, a Hopf bifurcation is a critical point where a system's stability switches and a periodic solution arises. More accurately, it is a local bifurcation in which a fixed point of a dynamical system loses stability, as a pair of complex conjugate eigenvalues—of the linearization around the fixed point—crosses the complex plane imaginary axis. Under reasonably generic assumptions about the dynamical system, a small-amplitude limit cycle branches from the fixed point.
A Hopf bifurcation is also known as a Poincaré–Andronov–Hopf bifurcation, named after Henri Poincaré, Aleksandr Andronov and Eberhard Hopf.
The limit cycle is orbitally stable if a specific quantity called the first Lyapunov coefficient is negative, and the bifurcation is supercritical. Otherwise it is unstable and the bifurcation is subcritical.
The normal form of a Hopf bifurcation is:
Write formula_2. The number α is called the first Lyapunov coefficient.
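As a numerical illustration (a sketch with an assumed first Lyapunov coefficient of −1, i.e. the supercritical case), the radial part of the normal form can be integrated to show that trajectories settle onto a limit cycle whose amplitude is the square root of the bifurcation parameter:

```python
import math

# Radial part of the supercritical Hopf normal form: dr/dt = r * (lam - r**2),
# obtained from dz/dt = ((lam + i*omega) - |z|**2) * z, i.e. alpha = -1 assumed.
lam = 0.25          # bifurcation parameter, just past the bifurcation
r, dt = 0.01, 1e-3  # small initial amplitude, forward-Euler step

for _ in range(200_000):  # integrate to t = 200
    r += dt * r * (lam - r * r)

# The trajectory settles on the limit cycle of radius sqrt(lam) = 0.5.
print(abs(r - math.sqrt(lam)) < 1e-6)
```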
Hopf bifurcations occur in the Lotka–Volterra model of predator–prey interaction (known as the paradox of enrichment), the Hodgkin–Huxley model for nerve membranes, the Selkov model of glycolysis, the Belousov–Zhabotinsky reaction, the Lorenz attractor, the Brusselator, and classical electromagnetism.
The phase portrait illustrating the Hopf bifurcation in the Selkov model is shown on the right.
In railway vehicle systems, Hopf bifurcation analysis is notably important. Conventionally, a railway vehicle's motion is stable at low speeds and becomes unstable at high speeds. One aim of the nonlinear analysis of these systems is to perform an analytical investigation of bifurcation, nonlinear lateral stability and hunting behaviour of rail vehicles on a tangent track, using the Bogoliubov method.
The appearance or disappearance of a periodic orbit through a local change in the stability properties of a fixed point is known as the Hopf bifurcation. The following theorem applies to fixed points with one pair of conjugate nonzero purely imaginary eigenvalues. It gives the conditions under which this bifurcation phenomenon occurs.
Theorem (see section 11.2 of ). Let formula_6 be the Jacobian of a continuous parametric dynamical system evaluated at a steady point formula_7. Suppose that all eigenvalues of formula_6 have negative real part except one conjugate nonzero purely imaginary pair formula_9. A "Hopf bifurcation" arises when these two eigenvalues cross the imaginary axis because of a variation of the system parameters.
The Routh–Hurwitz criterion (section I.13 of ) gives necessary conditions for a Hopf bifurcation to occur. Let us see how this idea can be used concretely.
Let formula_10 be the Sturm series associated with a characteristic polynomial formula_11. They can be written in the form:
The coefficients formula_13 for formula_14 in formula_15 correspond to what is called Hurwitz determinants. Their definition is related to the associated Hurwitz matrix.
Proposition 1. If all the Hurwitz determinants formula_13 are positive, apart perhaps from formula_17, then the associated Jacobian has no pure imaginary eigenvalues.
Proposition 2. If all Hurwitz determinants formula_13 (for all formula_14 in formula_20) are positive, formula_21 and formula_22, then all the eigenvalues of the associated Jacobian have negative real parts except a purely imaginary conjugate pair.
This last proposition gives the conditions under which a Hopf bifurcation occurs (see the theorem above) for a parametric continuous dynamical system.
Consider the classical Van der Pol oscillator written with ordinary differential equations:
The Jacobian matrix associated with this system is:
The characteristic polynomial (in formula_25) of the linearization at (0,0) is equal to:
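The displayed equations are missing above; in the standard first-order form of the Van der Pol oscillator (an assumed reconstruction, with μ the damping parameter) the system, its Jacobian at the origin, and the characteristic polynomial read:

```latex
\dot{x} = y, \qquad \dot{y} = \mu\left(1 - x^{2}\right)y - x,
\qquad
J(0,0) = \begin{pmatrix} 0 & 1 \\ -1 & \mu \end{pmatrix},
\qquad
\chi(\lambda) = \lambda^{2} - \mu\lambda + 1 .
```

For small |μ| the eigenvalues form a complex conjugate pair with real part μ/2, which crosses the imaginary axis at μ = 0, where the Hopf bifurcation occurs.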